What exactly does EKS do if CloudFormation is needed?

What exactly does AWS's Elastic Kubernetes Service (EKS) do if so much of the configuration has to be done in CloudFormation, which is yet another AWS service?
I followed the AWS EKS Getting Started guide in the docs (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf), where it seems that CloudFormation knowledge is heavily required to run EKS.
Am I mistaken, or am I missing something?
So in addition to learning the Kubernetes .yaml manifest definitions, to run k8s on EKS, AWS expects you to learn their CloudFormation .yaml configuration manifests as well (which are all PascalCase as opposed to k8s' camelCase, I might add)?
I understand that EKS manages the control plane, keeps up with recent Kubernetes versions, and is "secure by default", but other than that?
Why wouldn't I just run k8s on AWS using kops instead, and live with the slightly outdated k8s versions?
Or am I supposed to do EKS + CloudFormation + kops at which point GKE looks like a really tempting alternative?
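To make the casing difference mentioned above concrete, here is a small illustrative sketch (resource names, the cluster name, role ARN, and subnet IDs are placeholders, not anything prescribed by AWS): a Kubernetes manifest uses camelCase keys, while the equivalent CloudFormation resources use PascalCase keys.

```yaml
# Kubernetes manifest (camelCase keys): illustrative Deployment fragment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
---
# CloudFormation template (PascalCase keys): illustrative EKS node group fragment
Resources:
  Workers:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: demo-cluster                                # placeholder
      NodeRole: arn:aws:iam::111122223333:role/eks-node-role   # placeholder
      Subnets:
        - subnet-aaaa1111                                      # placeholders
        - subnet-bbbb2222
      ScalingConfig:
        MinSize: 1
        MaxSize: 3
        DesiredSize: 2
```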
Update:
At this point, after researching EKS in detail and seeing how reliant it is on CloudFormation manifests, I'm really starting to think EKS is just a thin wrapper over CloudFormation.
It looks like a business response to the alarming popularity of k8s and GKE, with little substance to back the service.
Hopefully this helps save the time of anyone else evaluating the half-baked service that is EKS.

To run Kubernetes on AWS you basically have two options:
kops, which creates the master nodes + worker nodes under the hood as plain EC2 machines
EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers; a minimal eksctl config is sketched below). I recommend you follow this workshop.
EKS alone provides only the master nodes of a kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.
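For reference, here is a minimal eksctl config sketch that creates both the EKS control plane and a worker node group; the cluster name, region, instance type, and sizing below are illustrative placeholders, not recommendations.

```yaml
# cluster.yaml (placeholder name/region/sizing)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
# Create the control plane and the workers in one go with:
#   eksctl create cluster -f cluster.yaml
```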
I tried both kops and EKS + workers, and I ended up using EKS because I found it easier to set up and maintain, and more fault-tolerant.

I ran into the same difficulties earlier, and no article gave me an at-a-glance list of everything that needs to be done. A lot of people just recommend eksctl, which in my opinion generates bloated and hard-to-manage CloudFormation.
Basically, EKS is just a managed wrapper around Kubernetes; there are several points of integration between Kubernetes and AWS that still need to be set up manually.
I've written an article that I hope helps you understand all the pieces that need to be in place.
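One concrete example of that manual glue (sketched here with a placeholder account ID and role name): when you attach self-managed worker nodes to an EKS cluster, you have to map their IAM role into the aws-auth ConfigMap so the nodes are allowed to join the cluster.

```yaml
# aws-auth ConfigMap in kube-system: maps the workers' IAM role to Kubernetes groups
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role   # placeholder
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```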

EKS is the managed control plane for Kubernetes, while CloudFormation is an infrastructure templating service.
Instead of EKS you can run and manage the control plane (master nodes) yourself on EC2 machines if you want to optimize for cost. With EKS you pay for the underlying infrastructure (EC2, networking, etc.) plus the managed service fee (the EKS price).
CloudFormation provides a nice interface to template and automate your infrastructure. You can use Terraform in place of CloudFormation.
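To illustrate the split, a minimal CloudFormation sketch that declares only the EKS control plane could look like this (the role ARN and subnet IDs are placeholders); worker nodes would still have to be defined separately.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal EKS control plane only; workers are defined elsewhere
Resources:
  ControlPlane:
    Type: AWS::EKS::Cluster
    Properties:
      Name: demo-cluster
      RoleArn: arn:aws:iam::111122223333:role/eks-cluster-role   # placeholder
      ResourcesVpcConfig:
        SubnetIds:
          - subnet-aaaa1111                                      # placeholders
          - subnet-bbbb2222
```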

Related

Pulumi EKS cluster: @pulumi/eks vs. @pulumi/aws

I'm trying to create an AWS EKS cluster with Pulumi and it seems two components exist:
@pulumi/eks, providing a Cluster component
@pulumi/aws, providing an eks/Cluster component
@pulumi/eks seems to be higher level, but I cannot find documentation specifying the concrete difference between the two, or whether one is preferred for certain use cases.
What's the difference between those two components?
@pulumi/eks's Cluster is a component resource built on top of @pulumi/aws's eks/Cluster and other resources to simplify provisioning of EKS clusters. Its goal is to make common scenarios achievable with a handful of lines of code, as opposed to the involved model of raw AWS resources.
You can find some usage examples in
AWS Crosswalk: AWS Elastic Kubernetes Service
Easily Create and Manage AWS EKS Kubernetes Clusters with Pulumi.
I suggest you start with @pulumi/eks and see if it works well for you.
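To make the difference concrete, here is a hedged sketch assuming Pulumi's YAML runtime (the same components exist for TypeScript, Python, etc.; the resource names, role ARN, and subnet IDs are placeholders). The high-level eks:Cluster from @pulumi/eks stands up the control plane, a default node group, and a kubeconfig in one resource, while the low-level aws:eks:Cluster from @pulumi/aws creates only the control plane and leaves the IAM role, VPC config, node groups, and aws-auth wiring to you.

```yaml
# Pulumi.yaml sketch (YAML runtime; values are placeholders)
name: eks-demo
runtime: yaml
resources:
  # High-level component from @pulumi/eks: control plane + default node group + kubeconfig
  easyCluster:
    type: eks:Cluster
    properties:
      desiredCapacity: 2
      minSize: 1
      maxSize: 3
  # Low-level resource from @pulumi/aws: control plane only, everything else is up to you
  rawCluster:
    type: aws:eks:Cluster
    properties:
      roleArn: arn:aws:iam::111122223333:role/eks-cluster-role   # placeholder
      vpcConfig:
        subnetIds:
          - subnet-aaaa1111                                      # placeholders
          - subnet-bbbb2222
outputs:
  kubeconfig: ${easyCluster.kubeconfig}
```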

What is the best way to set up a CI/CD pipeline on ECS?

There are so many options:
Docker Compose with the ECS CLI looks like the easiest solution (a minimal compose file is sketched at the end of this question)
Terraform
CloudFormation (looks complex!)
Ansible
I am only interested in setting up a basic ECS Docker setup with an ELB and easily updating the Docker image version.
We all love technology here, but we're not all super geniuses when it comes to tech, so I'm looking to keep my setup as simple as possible. We run Jenkins, two Node.js applications, and two Java applications in ECS, and I know it involves IAM, security groups, EBS, ELB, ECS services/tasks, and ECS task definitions, but that already gets complex quickly in CloudFormation.
What are good technologies that will allow us to use Docker, keep things simple, and not require us to be very intelligent to understand our own code?
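For the Compose-based option mentioned in the list above, the file you feed to the ECS tooling can stay very small; here is a sketch with a placeholder image URI and ports (the legacy ecs-cli compose workflow and the newer Docker Compose ECS integration both consume a file like this).

```yaml
# docker-compose.yml (placeholder image and ports)
version: "3"
services:
  web:
    image: 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest
    ports:
      - "80:3000"
```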
I would suggest you start by trying to set up your pipeline using Terraform. Learning it will give you experience with vendor-neutral infrastructure as code.
Another possibility is to avoid using CloudFormation directly and instead use the AWS CDK (https://docs.aws.amazon.com/cdk/latest/guide/home.html) for IaC.
Best regards

Difference between AWS EKS and ECS Fargate

I have used ECS Fargate, and it provides containerization and auto-scaling based on request count, CPU, and memory.
It is working as expected.
I started to explore AWS EKS and I don't see any advantage in using it, since all of this is already provided by ECS Fargate.
Could someone help me understand where to use ECS Fargate and where to use AWS EKS?
Any help is appreciated.
Thanks,
Harry
You would use AWS EKS if you want to use Kubernetes.
Since Kubernetes is a standard, you could in theory move your application from AWS EKS to other cloud providers like Azure, Google Cloud, or DigitalOcean easily since they all support Kubernetes.
If you don't care about Kubernetes, then I find that AWS ECS with the Fargate launch type (serverless compute for containers) is currently the easiest way to run Docker containers on AWS.
Note that Amazon is actively working on adding Fargate support to the EKS service.
I would check back after the AWS re:invent conference next month to see how things have changed in this area.
We hear these questions often and I tried to capture some of the core principles of these comparisons/positioning in this blog post.

deriving re-useable terraform configuration from kops

I manage an established AWS ECS application with terraform. The terraform also manages all other aspects of each of 4 AWS environments, including VPCs, subnets, bastion hosts, RDS databases, security groups and so on.
We manage our 4 environments by putting all the common configuration in modules which are parameterised with variables derived from the environment specific terraform files.
Now, we are trying to migrate to using Kubernetes instead of Amazon ECS for container orchestration and I am trying to do this incrementally rather than with a big bang approach. In particular, I'd like to use terraform to provision the Kubernetes cluster and link it to the other AWS resources.
What I'd initially hoped to do was capture the terraform output from kops create cluster, generalise it by parameterising it with environment specific variables and then use this one kubernetes module across all 4 environments.
However, I now realise this isn't going to work, because the k8s nodes and masters all reference the kops state bucket (in S3), and it seems I am going to have to clone that bucket and rewrite the files contained therein. This seems like a rather fragile way to manage the Kubernetes environment: if I recreate the Terraform environment, the related kops state bucket will be inconsistent with the AWS environment.
It seems to me that kops-generated Terraform may be useful for managing a single instance of an environment, but it isn't easily applied to multiple environments: you effectively need one kops-generated Terraform per environment, and there is no way to reuse the Terraform to establish a new environment; for that you must fall back from a declarative approach and resort to an imperative kops create cluster command.
Am I missing a good way to manage the definition of multiple similar kubernetes environments with a single terraform module?
I'm not sure how you reached either conclusion: that you need Terraform (which will cause more trouble than it will ever solve; that's definitely a tool to get rid of ASAP), or that you have to duplicate S3 buckets.
I'm pretty sure you'd be interested in kops's cluster template feature.
You won't need to generate, hack, and launch (and debug...) Terraform, and kops templates are just as easy if not significantly easier (and more specific...) to maintain than Terraform.
When kops releases new versions, you won't have to re-generate and re-hack your Terraform scripts either!
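As a sketch of that workflow (the cluster domain, values, and state store below are placeholders), you keep one Go-templated cluster spec plus a small values file per environment, then render and apply it with the kops toolbox:

```yaml
# cluster.tmpl: Go-templated kops Cluster spec (trimmed; generate a full starting
# point with `kops create cluster --dry-run -o yaml`, then parameterise it)
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: {{ .environment }}.k8s.example.com
spec:
  kubernetesVersion: "{{ .kubernetesVersion }}"
  networkCIDR: "{{ .networkCIDR }}"
---
# values-dev.yaml: per-environment values (placeholders)
environment: dev
kubernetesVersion: 1.27.5
networkCIDR: 10.10.0.0/16

# Render, create, and apply (placeholder state store):
#   kops toolbox template --template cluster.tmpl --values values-dev.yaml > cluster-dev.yaml
#   kops create -f cluster-dev.yaml --state s3://my-kops-state-store
#   kops update cluster dev.k8s.example.com --state s3://my-kops-state-store --yes
```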
Hope this helps!

KOPS over AWS EKS or vice versa

I came across the open-source Kubernetes project kops and the AWS managed Kubernetes service EKS. Both of these products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over kops, or vice versa, if one has not run either of them before.
This question does not ask which one is better, but rather asks for a comparison.
The two are largely the same. At the time of writing, these are the differences I'm aware of between the two offerings:
EKS:
Fully managed control plane from AWS - you have no control over the masters
Native AWS IAM authentication with the cluster
VPC-level networking for pods, meaning you can use things like security groups at the cluster/pod level
kops:
Support for more Kubernetes features, such as API server options (a small example follows at the end of this answer)
Auto-provisioned nodes use the built-in kops nodeup tool
More flexibility over Kubernetes versions; EKS only has a few versions available right now
Another significant difference is that EKS is an AWS product, so you need an AWS account, whereas kops lets you run Kubernetes not only on AWS but also on GCE and DigitalOcean.
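As a small example of the extra Kubernetes knobs kops exposes (field values are illustrative), the kops cluster spec lets you set kube-apiserver flags directly, which EKS does not allow because you never get access to the masters:

```yaml
# Fragment of a kops Cluster spec: direct API server configuration
spec:
  kubeAPIServer:
    auditLogPath: /var/log/kube-apiserver-audit.log   # illustrative values
    auditLogMaxAge: 30
```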