We host Docker containers on AWS infrastructure using AWS EKS. My reading so far shows that the kubectl command-line tool gives me commands to query and manipulate the EKS cluster. The aws eks command-line tool also gives me commands to do this. To my inexperienced eye, they look like they offer the same facilities.
Are there certain situations when it's better to use one or the other?
The aws eks command is for interacting with the proprietary AWS EKS APIs to perform administrative tasks such as creating a cluster, updating your kubeconfig with the correct credentials, and so on.
kubectl is an open-source CLI tool which lets you interact with the Kubernetes API server to perform tasks such as creating pods, deployments, etc.
You cannot use the aws eks command to interact with the Kubernetes API server and perform Kubernetes-specific operations, because it does not understand the Kubernetes APIs.
Similarly, you cannot use kubectl to interact with the proprietary AWS EKS APIs, because kubectl does not understand them.
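To make the split concrete, here's a minimal sketch of the division of labor (the cluster name and region are placeholders, not from the question):

    # EKS administrative tasks go through the AWS CLI:
    aws eks describe-cluster --name my-cluster --region us-east-1
    aws eks update-kubeconfig --name my-cluster --region us-east-1

    # Once the kubeconfig is wired up, Kubernetes-level work goes through kubectl:
    kubectl get nodes
    kubectl apply -f pod.yaml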
So I was able to connect to a GKE cluster from a Java project and run a job using this:
https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/JobExample.java
All I needed was to configure my machine's local kubectl to point to the GKE cluster.
Now I want to ask if it is possible to trigger a job inside a GKE cluster from a Google Cloud Function, which means using the same library (https://github.com/fabric8io/kubernetes-client) but from a serverless environment. I have tried to run it, but obviously kubectl is not installed on the machine where the Cloud Function runs. I have seen something like this working with an AWS Lambda function that uses https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ecs/AmazonECS.html to run jobs in an ECS cluster. We're basically trying to migrate from that to GCP, so we're open to any suggestion for triggering the jobs in the cluster from some kind of code hosted in GCP, in case the Cloud Function can't do it.
Yes, you can trigger your GKE job from a serverless environment, but you have to be aware of some edge cases.
With Cloud Functions, you don't manage the runtime environment. Therefore, you can't control what is installed in the container, and you can't install kubectl on it.
You have 2 solutions:
Kubernetes exposes an API. From your Cloud Function you can simply call that API; kubectl is just an API call wrapper, nothing more! Of course, this requires more effort, but if you want to stay on Cloud Functions you don't have any other choice (a sketch of a direct API call follows below).
You can switch to Cloud Run. With Cloud Run, you can define your own container and therefore install kubectl in it, in addition to your web server (you have to wrap your function in a web server with Cloud Run, but it's pretty easy ;) )
Whichever solution you choose, you also have to be aware of how the GKE control plane is exposed. If it is publicly exposed (generally not recommended), there is no issue. But if you have a private GKE cluster, the control plane is only accessible from the internal network. To solve that with a serverless product, you have to create a Serverless VPC Access connector to bridge the serverless Google-managed VPC with your GKE control-plane VPC.
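To illustrate the first option, here is a minimal sketch of creating a Job by calling the Kubernetes API directly with curl; the endpoint, token, and file paths are placeholders you'd replace with your cluster's values:

    # Obtain the control-plane endpoint and a bearer token for a service
    # account that is allowed to create Jobs (placeholder paths below).
    TOKEN="$(cat /path/to/service-account-token)"
    ENDPOINT="https://<your-gke-endpoint>"

    # POST a Job manifest (JSON) to the batch/v1 API:
    curl --cacert /path/to/ca.crt \
      -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      -X POST -d @job.json \
      "${ENDPOINT}/apis/batch/v1/namespaces/default/jobs"

From a Cloud Function you would make the equivalent HTTP call with your language's HTTP client instead of shelling out to curl.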
I see two commands which can operate a K8s cluster on GCP. One is gcloud container clusters and the other is kubectl.
Can anyone tell me what is the difference between them?
The main difference is that the gcloud container clusters command is primarily used for managing the allocation of resources for the cluster itself, e.g. it tells Google Cloud Platform how to create, modify, and destroy the clusters and the resources supporting them. (Also important here are the gcloud container node-pools, gcloud container operations, and gcloud container subnets commands.)
It also has a key command, gcloud container clusters get-credentials, which gives you the credentials you need to run the second command, kubectl.
kubectl, on the other hand, is the Kubernetes control command. It is used with all Kubernetes clusters, regardless of whether they are on GCP, some other cloud provider, or manually set up on your own local hardware. It is primarily used for manipulating the workloads (e.g. Pods, Deployments, StatefulSets, CronJobs, etc.) of the cluster itself, along with other configuration data (e.g. ConfigMaps, Secrets). It also allows for Kubernetes-native administration of the cluster itself (e.g. granting cluster-based roles to users, creating namespaces, etc.).
Essentially, gcloud gives you the ability to provision and deprovision resources, whereas kubectl gives you the ability to use the clusters once provisioned.
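A typical end-to-end flow looks like this (cluster name and zone are placeholders):

    # Provision the cluster with gcloud:
    gcloud container clusters create my-cluster --zone us-central1-a

    # Fetch credentials so kubectl can talk to it:
    gcloud container clusters get-credentials my-cluster --zone us-central1-a

    # From here on, use kubectl for workloads:
    kubectl get nodes
    kubectl apply -f deployment.yaml

    # Deprovision with gcloud when done:
    gcloud container clusters delete my-cluster --zone us-central1-a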
More information:
gcloud container clusters reference and GKE Documentation.
kubectl reference and overview
From a local machine, we can apply Kubernetes YAML files to AWS EKS using the AWS CLI + aws-iam-authenticator + kubectl. How do we do it in Ansible Tower / AWX?
I understand that there are a few Ansible modules available, but none seems to be able to apply Kubernetes YAML to EKS.
k8s doesn't seem to support EKS at the moment.
aws_eks_cluster only allows the user to manage the EKS cluster itself (e.g. create, remove).
I think you can reach the goal via the k8s module, as it natively supports a kubeconfig parameter which you can use for EKS cluster authentication. You can follow the steps described in the official documentation in order to compose the kubeconfig file. There was a separate thread on GitHub, #45858, about applying Kubernetes manifest files through the k8s module; the contributors there were facing an authorization issue, so take a chance and look through the conversation, maybe you will find some helpful suggestions.
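As a rough sketch of that approach (the cluster name, file names, and playbook variable are placeholders): generate a kubeconfig for the EKS cluster, then point the k8s module's kubeconfig parameter at it. Note that the generated kubeconfig still invokes aws-iam-authenticator (or aws eks get-token), so that binary must be available on the Tower/AWX execution node:

    # Write a kubeconfig for the EKS cluster to a known path:
    aws eks update-kubeconfig --name my-eks-cluster --kubeconfig ./eks-kubeconfig

    # Run the playbook whose k8s tasks reference that file via kubeconfig:
    ansible-playbook deploy-manifests.yml -e kubeconfig_path=./eks-kubeconfig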
What does AWS' Elastic Kubernetes Service (EKS) do, exactly, if so much configuration is needed in CloudFormation, which is (yet) another AWS service?
I followed the AWS EKS Getting Started guide in the docs (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf), where it seems CloudFormation knowledge is heavily required to run EKS.
Am I mistaken or something?
So in addition to learning the Kubernetes .yaml manifest definitions, to run k8s on EKS, AWS expects you to learn their CloudFormation .yaml configuration manifests as well (which are all PascalCase as opposed to k8s' camelCase, I might add)?
I understand that EKS does some management of latest version of k8s and control plane, and is "secure by default" but other than that?
Why wouldn't I just run k8s on AWS using kops then, and deal with the slightly outdated k8s versions?
Or am I supposed to do EKS + CloudFormation + kops at which point GKE looks like a really tempting alternative?
Update:
At this point I'm really thinking EKS is just a thin wrapper over CloudFormation, after researching EKS in detail and seeing how reliant it is on CloudFormation manifests.
Likely a business response to the alarming popularity of k8s and GKE in general, with no substance to back the service.
Hopefully this helps save the time of anyone evaluating the half-baked service that is EKS.
To run Kubernetes on AWS, you basically have two options:
using kops, which will create the master nodes + worker nodes under the hood, on plain EC2 machines
EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers; I recommend you follow this workshop)
EKS alone provides only the master nodes of a Kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.
I tried both kops and EKS + workers, and I ended up using EKS because I found it easier to set up and maintain, and more fault-tolerant.
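For reference, the eksctl route mentioned above can stand up both the control plane and the workers in one command; the names and sizes here are placeholders:

    # Create an EKS cluster plus a group of worker nodes:
    eksctl create cluster \
      --name my-cluster \
      --region us-east-1 \
      --nodes 3 \
      --node-type t3.medium

    # Confirm the workers registered with the control plane:
    kubectl get nodes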
I felt the same difficulties earlier, and no article could give me, at a glance, the things that need to be done. A lot of people just recommend using eksctl, which in my opinion creates a bloated and hard-to-manage pile of CloudFormation.
Basically, EKS is just a wrapper around Kubernetes; there are some points of integration between Kubernetes and AWS that still need to be set up manually.
I've written an article that I hope can help you understand all the processes that need to be in place.
EKS is the managed control plane for Kubernetes, while CloudFormation is an infrastructure templating service.
Instead of EKS, you can run and manage the control plane (master nodes) on top of EC2 machines yourself if you want to optimize for cost. When using EKS, you pay for the underlying infrastructure (EC2 + networking, etc.) plus the managed-service fee (the EKS price).
CloudFormation provides a nice interface to template and automate your infrastructure. You may use Terraform in place of CloudFormation.
I came across an open-source Kubernetes project, kops, and the AWS Kubernetes service, EKS. Both of these products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over kops, or vice versa, if one has not run either of them before.
This question does not ask which one is better, but rather asks for a comparison.
The two are largely the same. At the time of writing, the following are the differences I'm aware of between the two offerings:
EKS:
Fully managed control plane from AWS - you have no control over the masters
Native AWS IAM authentication with the cluster
VPC-level networking for pods, meaning you can use things like security groups at the cluster/pod level
kops:
Support for more Kubernetes features, such as API server options
Auto-provisioned nodes use the built-in kops nodeup tool
More flexibility over Kubernetes versions; EKS only has a few versions available right now
Another significant difference is that EKS is an AWS product, so it requires an AWS account, whereas kops allows you to run Kubernetes not only on AWS but also on GCE and DigitalOcean.
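For comparison, a minimal kops bootstrap looks like this (the state-store bucket, cluster name, and zones are placeholders); unlike EKS, kops provisions the masters as EC2 instances you control:

    # kops keeps cluster state in an S3 bucket:
    export KOPS_STATE_STORE=s3://my-kops-state-bucket

    # Create master + worker nodes on plain EC2:
    kops create cluster \
      --name my-cluster.example.com \
      --zones us-east-1a \
      --yes

    # Wait for the cluster to report as ready:
    kops validate cluster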