Is there a way to point my machine's kubectl to a K8s cluster hosted on EC2 Instances?

I want to point my machine's kubectl to a cluster running on EC2 instances.
When using EKS, it's fairly easy; we just need to run:
aws eks --region=<REGION> update-kubeconfig --name <CLUSTER_NAME>
However, how can I get the same behaviour if my master and worker nodes are hosted directly on EC2 instances?

First, download your cluster config from the master node to your machine. Then merge the downloaded config with the existing config on your local machine, as sketched below.
After that, you can manage your contexts with kubectx.
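A minimal sketch of that flow, assuming a kubeadm-provisioned cluster whose admin kubeconfig sits at the default /etc/kubernetes/admin.conf (user, host, and file names are placeholders; you may need root on the master to read the file):
#Copy the admin kubeconfig from the master node to your machine
scp ubuntu@<MASTER_PUBLIC_IP>:/etc/kubernetes/admin.conf ~/.kube/ec2-cluster.conf
#Merge it with your existing kubeconfig and flatten into a single file
KUBECONFIG=~/.kube/config:~/.kube/ec2-cluster.conf kubectl config view --flatten > ~/.kube/merged.conf
mv ~/.kube/merged.conf ~/.kube/config
#Switch between contexts
kubectx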

Related

How can I give statping, deployed outside the k8s cluster, access to monitor k8s services' uptime?

I want statping to be independent of the infra it is monitoring, but I want to check the uptime of services that are on ClusterIPs inside the k8s EKS cluster. Will setting up kubeconfig on the EC2 instance help?
There are multiple ways to access Kubernetes Services from the statping EC2 instance.
All of them are discussed in https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/ (see in particular https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies).
kubectl proxy (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#using-kubectl-proxy) is a good option for your use case if you already have a kubeconfig on the statping EC2 instance.
You can use https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls to construct the proxy URLs.
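A rough sketch of the kubectl proxy approach (the namespace, Service name, and port name below are hypothetical):
#Start a local proxy to the apiserver on the statping instance
kubectl proxy --port=8080 &
#Reach a ClusterIP Service through the apiserver proxy URL
curl http://localhost:8080/api/v1/namespaces/default/services/my-service:http/proxy/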

Access S3 from a pod in a Kubernetes cluster

I am new to Kubernetes and I am transitioning some of my apps to a K8s cluster.
I make heavy use of S3 in my containers, which I accessed through IAM roles in AWS.
I have configured a 2-node cluster using kubeadm on EC2 instances (not EKS).
But I am stuck: whenever I run the container through pods, I get the error:
Could not connect to the endpoint URL: "https://<bucket_name>.s3.amazonaws.com/"
I have IAM roles attached to the EC2 instances that are configured as master and nodes.
Please suggest the best way to establish an S3 connection through pods.
Any document/git repo link will be highly appreciated. Thanks in advance.
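A quick, hedged way to narrow this down: pods normally obtain the node's instance-profile credentials through the EC2 metadata endpoint (169.254.169.254), so you can check whether that endpoint is reachable from inside a pod (the pod name is a placeholder, and this assumes the image ships wget):
#List the IAM roles visible from inside the pod via the EC2 metadata endpoint
kubectl exec -it <POD_NAME> -- sh -c 'wget -qO- http://169.254.169.254/latest/meta-data/iam/security-credentials/'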

AWS EKS no exposed ports

So I've been struggling with the fact that I'm unable to expose any deployment in my EKS cluster.
I got down to this:
My LoadBalancer service's public IP never responds.
I went to the load balancer section in my AWS console.
The load balancer is not working because my cluster node is not passing the health checks.
I SSH'd into my cluster node and found out that containers do not have ports associated with them:
This makes the cluster node fail the health checks, so no traffic is forwarded that way.
I tried running a simple nginx container manually, directly on my cluster node without kubectl:
docker run -p 80:80 nginx
and pasted the node's public IP in my browser. No luck.
Then I tried curling the nginx container directly from the cluster node via SSH:
curl localhost
And I'm getting this response: "curl: (7) Failed to connect to localhost port 80: Connection refused"
Why are containers on the cluster node not showing ports?
How can I make the cluster node pass the load balancer health checks?
Could it have something to do with the fact that I created a single-node cluster with eksctl?
What other options do I have to easily run a Kubernetes cluster in AWS?
This is something in between an answer and a question, but I hope it will help you.
I've been using the Deploying a Kubernetes Cluster with Amazon EKS guide for years when it comes to creating EKS clusters.
For test purposes, I just spun up a new cluster and it works as expected, including accessing the test application using the external LB IP and passing health checks.
In short you need:
1. Create an EKS role
2. Create a VPC to use with EKS
3. Create a stack (CloudFormation) from https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml
4. Export variables to simplify further CLI command usage
export EKS_CLUSTER_REGION=
export EKS_CLUSTER_NAME=
export EKS_ROLE_ARN=
export EKS_SUBNETS_ID=
export EKS_SECURITY_GROUP_ID=
5. Create the cluster, verify its creation, and generate the appropriate config.
#Create
aws eks --region ${EKS_CLUSTER_REGION} create-cluster --name ${EKS_CLUSTER_NAME} --role-arn ${EKS_ROLE_ARN} --resources-vpc-config subnetIds=${EKS_SUBNETS_ID},securityGroupIds=${EKS_SECURITY_GROUP_ID}
#Verify
watch aws eks --region ${EKS_CLUSTER_REGION} describe-cluster --name ${EKS_CLUSTER_NAME} --query cluster.status
#Create .kube/config entry
aws eks --region ${EKS_CLUSTER_REGION} update-kubeconfig --name ${EKS_CLUSTER_NAME}
Can you please check the article and confirm you haven't missed any steps during the installation?
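As a smoke test once the cluster is up (the image and Service name below are just examples):
#Deploy a test app and expose it through a LoadBalancer Service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=LoadBalancer --port=80
#Wait for the ELB hostname to appear, then hit it
kubectl get svc nginx -w
curl http://<ELB_DNS_NAME>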

How to deploy an application on a GKE private cluster with Terraform?

I have made a bastion host VM (to be used as the master authorized network in the private cluster) and a private cluster with Terraform, which works fine. Now, to deploy an application on the private cluster manually, we SSH into that bastion host VM first, then connect to the private cluster and run kubectl apply (the deploy command). How can we do this deployment procedure with a Terraform script in GCP? Can anyone please help, as I couldn't find the right example for doing this in GCP?
Instead of SSHing into your bastion machine, you can, for example, just use Ansible. First you need to configure Ansible to access the machine. Then you can run your Ansible scripts, which contain the kubectl commands for deployment.
Preferably, you should use multiple Ansible roles to split your services' deployment; then you can manage everything with a main Ansible playbook.
In addition, Ansible scripts can be hosted in and integrated with a CI/CD server or tool like GitLab CI or Jenkins, and at the end of the day you deploy your services on Kubernetes via your CI/CD pipeline.
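For reference, a minimal sketch of the manual flow that such automation wraps (the VM name, zone, and manifest file are hypothetical):
#Copy the manifest to the bastion, then apply it from there
gcloud compute scp deployment.yaml bastion-vm:~/ --zone=us-central1-a
gcloud compute ssh bastion-vm --zone=us-central1-a --command='kubectl apply -f ~/deployment.yaml'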

Does AWS support Kubernetes?

I've read that AWS does not support Kubernetes and built their own Docker orchestration engine, EC2 Container Service. However, on the Kubernetes getting-started page there is a guide on how to run Kubernetes on AWS:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md
Which is right?
You can install Kubernetes on a normal Amazon EC2 server.
The new container service is a separate offering by Amazon, called ECS.
EDIT: In 2018, AWS released a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/
Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
Kubernetes provides scripts to easily set up a cluster on a set of EC2 machines. The setup does pretty much everything needed to get started quickly.
Here is the link: https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/getting-started-guides/aws.md
Yes, it's possible to set up Kubernetes on AWS. See: http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html
You can also manually set up Kubernetes on AWS by launching an EC2 instance.
For setting it up on a Red Hat AMI: https://access.redhat.com/articles/1353773
(Note: Kubernetes needs a flannel network to be set up for managing networking between Docker containers running on different hosts (minions).)
Amazon's Container Service is unrelated to Kubernetes.
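Regarding the flannel note above, a minimal sketch for a kubeadm-style cluster (the manifest URL reflects flannel's long-standing location in the coreos/flannel repo and may have since moved):
#Deploy the flannel CNI once the control plane is up
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml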
There are 3 main options for installing Kubernetes on AWS:
CoreOS has a CLI for installing and managing Kubernetes on AWS: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
Kubernetes has some scripts for setting up a cluster on AWS: http://kubernetes.io/docs/getting-started-guides/aws/
Manual installation on EC2. There are lots of options here: http://kubernetes.io/docs/getting-started-guides/#cloud
As an aside, minikube is now available, which is nice for running a cluster locally to try things out:
http://kubernetes.io/docs/getting-started-guides/minikube/
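A minimal local run, assuming minikube is installed:
#Start a local single-node cluster and check it
minikube start
kubectl get nodes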
AWS recently launched EKS, which provides managed k8s master nodes. This should be what you are looking for.
Yes. You can use kubeadm to install Kubernetes on EC2 instances.
There are also other tools available:
kops
EKS
kubeadm
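As a minimal kubeadm sketch on EC2 (the pod CIDR matches flannel's default; the IP, token, and hash are placeholders printed by kubeadm init):
#On the master instance: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
#On each worker instance: join the cluster using the values kubeadm init printed
sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>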