I want to point my machine's kubectl to a cluster running on EC2 instances.
When using EKS this is fairly easy; we just need to run:
aws eks --region=<REGION> update-kubeconfig --name <CLUSTER_NAME>
However, how can I get the same behaviour if my master nodes and worker nodes are hosted directly on EC2 instances?
First, download your cluster config from the master node to your machine. Then merge the downloaded config with the existing config on your local machine.
This post will help you merge them.
Then you can manage your contexts with kubectx.
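A minimal sketch of the download-and-merge, assuming the cluster config lives at ~/.kube/config on the master node (paths, user and IP are placeholders):

scp ec2-user@<MASTER_PUBLIC_IP>:~/.kube/config ~/.kube/ec2-cluster.conf   # copy the cluster config down
KUBECONFIG=~/.kube/config:~/.kube/ec2-cluster.conf kubectl config view --flatten > ~/.kube/merged.conf   # merge both configs
mv ~/.kube/merged.conf ~/.kube/config   # replace your local config with the merged one
kubectx   # list available contexts, then switch with: kubectx <CONTEXT_NAME>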
I am using AWS ECS to automatically deploy my server in a Docker container to my EC2 instance; the only problem is I have to use an Elastic Load Balancer (ELB). This is for a school project, but it also uses a Telegram bot, so I needed an HTTPS endpoint to receive updates from Telegram. An ELB is complete overkill for this and is also costing me more than I would like, considering everything else I am using is under the free tier. Does anyone know how to set up automatic deployment of a Docker container to EC2 without an ELB/ECS? Or does anyone know if it is possible to SSH into an EC2 instance during a build, since that could be a way to run a deployment script on the instance automatically from the build? Thanks!
You don't need ECS to run Docker. I have run Docker containers from an EC2 user-data script, so that it does a docker run command at launch. Works great.
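For example, a user-data sketch along those lines, assuming Amazon Linux 2 and a placeholder image name:

#!/bin/bash
# EC2 user-data: install Docker and start the container at launch
yum update -y
amazon-linux-extras install docker -y
systemctl enable --now docker
# <YOUR_IMAGE> is a placeholder for your own image (ECR, Docker Hub, ...)
docker run -d --restart unless-stopped -p 80:8080 <YOUR_IMAGE>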
My company uses a bastion host to connect to our private VPC, which is where I want to create my Docker swarm. I normally connect to any EC2 instance in our VPC using my ~/.ssh/config file, which routes the SSH connection through our bastion host.
The problem is that docker-machine doesn't seem to have the capability to support this, as indicated in https://github.com/docker/machine/pull/3410. Has anyone found a workaround for this, or do I just use a different way to deploy swarm? Any suggestions if that is the case?
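For context, the ~/.ssh/config redirection mentioned above is typically something like this (host names, IPs and key paths are placeholders; ProxyJump needs OpenSSH 7.3+, older versions use ProxyCommand instead):

# ~/.ssh/config
Host bastion
    HostName <BASTION_PUBLIC_IP>
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem

# Private instances in the VPC are reached through the bastion
Host 10.0.*.*
    User ec2-user
    IdentityFile ~/.ssh/my-key.pem
    ProxyJump bastion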
We preferred to delegate the responsibility of creating the environment to Terraform, since it can do much more than just instantiate the machines (for example, create the whole networking layer and cloud services), and then deploy the Docker swarm inside that environment, in our case behind a bastion host as well.
The deploy overview is like this:
describe the cloud landscape with terraform
describe the initial provisioning of the machines with ansible:
provision the bastion host to perform NAT/ssh/http reverse proxy to the inner hosts
provision the inner hosts by installing Docker, initializing the swarm managers and joining the workers (see the sketch after this list)
deploy the swarm stack in the inner hosts
Not saying this is the only way, but it is what worked in our scenario. Basically skipping docker machine altogether.
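What the swarm part of the Ansible provisioning boils down to is roughly this sketch (IPs, tokens and the stack name are placeholders):

# On the first manager node
docker swarm init --advertise-addr <MANAGER_PRIVATE_IP>

# Print the join command for workers
docker swarm join-token worker

# On each worker node, run the printed join command
docker swarm join --token <WORKER_TOKEN> <MANAGER_PRIVATE_IP>:2377

# Back on a manager, deploy the stack
docker stack deploy -c docker-stack.yml <STACK_NAME>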
I have been working on a React web app and I need to deploy it now. I have the codebase in GitLab and I'm using a GitLab pipeline to run the tests, create the build, and deploy. For deployment I need to deploy it to an EC2 instance. My pipeline runs well until creating the build. Now the problem is how to push that created build to the EC2 instance. Can someone help me here? I tried the following way.
Gitlab CI deploy AWS EC2
It showed me a connection timed out message instead of connecting to the EC2 instance. After that I allowed all IPs to access the instance over SSH using the security groups, and then it worked fine for me. But the problem is that it's not secure to allow all IPs to access SSH. How can I solve this problem?
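One way to avoid opening SSH to the world is to add a temporary security-group rule for the runner's current IP at the start of the deploy job and remove it when the job finishes. A minimal sketch, assuming the job has AWS credentials, a placeholder security group ID, and an SSH key in a CI variable named EC2_SSH_KEY:

#!/bin/bash
set -euo pipefail

RUNNER_IP=$(curl -s https://checkip.amazonaws.com)    # public IP of this CI runner
SG_ID=<YOUR_SECURITY_GROUP_ID>                        # placeholder security group

# Open SSH only for this runner, and make sure the rule is removed when the job ends
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$RUNNER_IP/32"
trap 'aws ec2 revoke-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr "$RUNNER_IP/32"' EXIT

# Copy the build to the instance and restart the app however you normally do
scp -i "$EC2_SSH_KEY" -r build/ ec2-user@<EC2_PUBLIC_IP>:/var/www/app/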
I've read that AWS does not support Kubernetes and builds its own Docker orchestration engine, EC2 Container Service. However, on the Kubernetes getting-started page there is a guide on how to run Kubernetes on AWS:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md
Which is right?
You can install Kubernetes on a normal Amazon EC2 server.
The new container service is a separate offering by Amazon, called ECS.
EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/
Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
Kubernetes provides scripts to easily set up a cluster on a set of EC2 machines. The setup does pretty much everything needed to get started quickly.
Here is the link: https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/getting-started-guides/aws.md
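Roughly, that guide boils down to something like this (see the linked page for the authoritative steps):

# Tell the installer to target AWS, then run the official cluster setup script
export KUBERNETES_PROVIDER=aws
curl -sS https://get.k8s.io | bash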
Yes, it's possible to set up Kubernetes on AWS. See: http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html
You can also manually set up Kubernetes on AWS by launching an EC2 instance.
For setting it up on a Red Hat AMI: https://access.redhat.com/articles/1353773
(Note: Kubernetes needs a flannel network to be set up to manage networking between Docker containers running on different hosts (minions).)
Amazon's Container Service is unrelated to Kubernetes.
There are 3 main options for installing Kubernetes on AWS:
CoreOS has a CLI for installing and managing Kubernetes on AWS: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
Kubernetes has some scripts for setting up a cluster on AWS: http://kubernetes.io/docs/getting-started-guides/aws/
Manual installation on EC2. Lots of options here: http://kubernetes.io/docs/getting-started-guides/#cloud
As an aside, minikube is now a thing, which is nice for running locally to try stuff out:
http://kubernetes.io/docs/getting-started-guides/minikube/
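A local try-out is only a couple of commands, e.g.:

minikube start      # starts a local single-node cluster and points kubectl at it
kubectl get nodes   # verify the node is Ready
minikube stop       # shut the cluster down when done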
AWS recently launched EKS, which provides managed k8s master nodes. This should be what you are looking for.
Yes. You can use kubeadm to install Kubernetes on EC2 instances.
There are other tools available as well (a minimal kubeadm sketch follows the list below):
KOPS
EKS
Kubeadm
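A minimal kubeadm sketch, assuming a container runtime plus kubeadm, kubelet and kubectl are already installed on the instances, and flannel as the pod network (addresses, tokens and hashes are placeholders):

# On the control-plane EC2 instance (pod CIDR matches flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the pod network add-on
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On each worker instance, run the join command printed by kubeadm init
sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>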