kubernetes: kops and IAMFullAccess policy

According to the documentation of both kops and AWS, the dedicated kops user needs the IAMFullAccess permission to operate properly.
Why is this permission needed?
Is there a way to avoid (i.e. restrict) this, given that creating a user with such a permission is a bit too intrusive?
Edit: one could assume that the specific permission is needed to attach the respective roles to the master and node instances;
therefore perhaps the question/challenge becomes how to:
not use IAMFullAccess;
sync with the node creation/bootstrapping process and attach the above roles (perhaps by creating a cluster on pre-configured instances? No idea whether kops provides for that).

As far as I understand kops' design, it's meant to be an end-to-end tool for provisioning you with k8s clusters. If you want to provision your nodes separately and deploy k8s on them, I would suggest using another tool, such as kubespray or kubeadm:
https://github.com/kubernetes-incubator/kubespray
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
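If you do want to stay with kops but drop IAMFullAccess, one approach is to replace it with a policy that allows only the IAM actions kops performs (creating the roles and instance profiles for masters and nodes) and scopes them to resources named after your cluster. The following is a minimal sketch only: the account ID, cluster name, and exact action list are assumptions you would need to verify against what your kops version actually calls.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedIamForKops",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:GetRole",
        "iam:PutRolePolicy",
        "iam:DeleteRolePolicy",
        "iam:GetRolePolicy",
        "iam:ListRolePolicies",
        "iam:CreateInstanceProfile",
        "iam:DeleteInstanceProfile",
        "iam:GetInstanceProfile",
        "iam:AddRoleToInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:PassRole"
      ],
      "Resource": [
        "arn:aws:iam::123456789012:role/*.mycluster.example.com",
        "arn:aws:iam::123456789012:instance-profile/*.mycluster.example.com"
      ]
    }
  ]
}
```
The resource patterns rely on kops naming its roles and instance profiles after the cluster (e.g. masters.mycluster.example.com and nodes.mycluster.example.com). You could attach the document above with aws iam put-user-policy; if cluster creation then fails partway, CloudTrail will show which additional IAM call was denied.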

Attach IAM role to multiple EC2 instances

There seems to be plenty of documentation that outlines making a role with its corresponding policies and then attaching it to a new or pre-existing (single) EC2 instance. However, when you have many instances and the task is to attach a role to all of them, I can't find or figure out a way that avoids doing the process one by one.
So, how does one attach an IAM role to multiple already-launched EC2 instances efficiently?
You'd have to do this one by one. The role would generally be attached at launch, but you can do it afterwards.
Programmatically looping would probably be the most efficient approach.
There is no way to bulk-assign roles to EC2 instances.
You can do this programmatically using the CLI or the SDK in your language of choice.
If using the CLI, you'll want the ec2 associate-iam-instance-profile command. Note that this command still accepts only a single instance identifier at a time, so you'll need to iterate through a list of instances and invoke it repeatedly, as in the sketch below.
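A minimal sketch of that loop, assuming the instance profile already exists and that the tag filter below matches your fleet (both the profile name and the tag are placeholders):
```bash
#!/usr/bin/env bash
# Attach an existing instance profile to every matching instance.
# PROFILE_NAME and the tag filter are placeholders for your environment.
PROFILE_NAME="my-instance-profile"

# Collect the IDs of all running instances carrying the tag.
instance_ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=prod" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text)

# associate-iam-instance-profile takes one instance per call,
# so it has to be invoked once per instance.
for id in $instance_ids; do
  aws ec2 associate-iam-instance-profile \
    --instance-id "$id" \
    --iam-instance-profile "Name=$PROFILE_NAME"
done
```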

Change the horizontal-pod-autoscaler-sync-period in EKS

I have a k8s cluster deployed in AWS's EKS and I want to change horizontal-pod-autoscaler-sync-period from the default 30s value.
How can I change this flag?
Unfortunately you are not able to do this on GKE, EKS and other managed clusters.
In order to change or add flags in kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node so you can modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS and similar clusters are managed solely by their providers, who do not give you access to the master nodes.
Similar questions:
1) horizontal-autoscaler-in-a-gke-cluster
2) change-the-horizontal-pod-autoscaler-sync-period-with-gke
As a workaround, you can create a cluster using kubeadm init and configure/change it in any way you want.
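On such a self-managed cluster the change is a one-line addition to the static pod manifest. Below is an abbreviated sketch: the flag name is the real kube-controller-manager option, while the surrounding fields stand in for whatever kubeadm generated on your master.
```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.18.0  # version as installed by kubeadm
    command:
    - kube-controller-manager
    # ...existing kubeadm-generated flags...
    - --horizontal-pod-autoscaler-sync-period=15s  # the flag in question
```
The kubelet watches the manifests directory, so saving the file restarts the kube-controller-manager pod with the new flag; no manual restart is needed.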

What exactly does EKS do if CloudFormation is needed?

What does AWS' Elastic Kubernetes Service (EKS) do exactly, if so much configuration is needed in CloudFormation, which is (yet) another AWS service?
I followed the AWS EKS Getting Started guide in the docs (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf), where it seems heavy CloudFormation knowledge is required to run EKS.
Am I mistaken about something?
So in addition to learning the Kubernetes .yaml manifest definitions, to run k8s on EKS, AWS expects you to learn their CloudFormation .yaml configuration manifests as well (which are all PascalCase as opposed to k8s' camelCase, I might add)?
I understand that EKS does some management of the latest k8s version and the control plane, and is "secure by default", but other than that?
Why wouldn't I just run k8s on AWS using kops instead, and deal with the slightly outdated k8s versions?
Or am I supposed to use EKS + CloudFormation + kops, at which point GKE looks like a really tempting alternative?
Update:
At this point I'm really thinking EKS is just a thin wrapper over CloudFormation, after researching EKS in detail and seeing how reliant it is on CloudFormation manifests.
It is likely a business response to the alarming popularity of k8s and GKE in general, with no substance to back the service.
Hopefully this helps save the time of anyone evaluating the half-baked service that is EKS.
To run Kubernetes on AWS you have basically two options:
using kops: it will create the master nodes + worker nodes under the hood, on plain EC2 machines;
EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers; see the one-command sketch after this answer. I recommend you follow this workshop).
EKS alone provides only the master nodes of a Kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.
I tried both kops and EKS + workers, and I ended up using EKS, because I found it easier to set up and maintain, and more fault-tolerant.
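For a sense of how eksctl hides the CloudFormation layer, a single command stands up both the control plane and a worker node group (the cluster name, region, and node counts below are placeholders):
```bash
# eksctl generates and applies the CloudFormation stacks for you:
# one for the EKS control plane, one for the worker node group.
eksctl create cluster \
  --name my-cluster \
  --region us-west-2 \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4
```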
I felt the same difficulties earlier, and no article could give me at a glance the requirements for the things that need to be done. A lot of people just recommend using eksctl, which in my opinion creates a bloated and hard-to-manage kind of CloudFormation.
Basically, EKS is just a wrapper around Kubernetes; there are some points of integration between Kubernetes and AWS that still need to be done manually.
I've written an article that I hope helps you understand all the processes that need to be in place.
EKS is the managed control plane for Kubernetes, while CloudFormation is an infrastructure templating service.
Instead of EKS, you can run and manage the control plane (master nodes) on top of EC2 machines yourself if you want to optimize for costs. For EKS you have to pay for the underlying infra (EC2 + networking, ...) plus the managed service fee (the EKS price).
CloudFormation provides a nice interface to template and automate your infrastructure; a minimal template sketch follows below. You may use Terraform in place of CF.
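To illustrate the templating, this is a hedged, minimal sketch of an EKS control plane declared in CloudFormation; the role ARN and subnet IDs are placeholders you would supply from your own account and VPC:
```yaml
# Minimal CloudFormation sketch of an EKS control plane.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  MyEksCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: my-cluster
      RoleArn: arn:aws:iam::123456789012:role/eksServiceRole  # placeholder service role
      ResourcesVpcConfig:
        SubnetIds:          # placeholder subnets in at least two AZs
          - subnet-aaaa1111
          - subnet-bbbb2222
```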

How to expand Kubernetes node instance profiles created by kops?

I'm running a Kubernetes cluster on AWS, managed with kops. Now I want to run external-dns, which requires additional permissions in the nodes' instance role. My question is: what is the best way to make these changes?
I could edit the role manually in AWS, but I want to automate my setup. I could also edit the role through the API (using the CLI, CloudFormation, Terraform, etc.), but then I have a two-phase setup, which seems fragmented and inelegant. Ideally I'd want to tell kops about my additional needs and have it manage them together with the ones it manages itself. Is there any way to do this?
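kops does expose a hook for exactly this: the additionalPolicies field in the cluster spec, which merges extra IAM statements into the roles kops manages. A sketch for external-dns, assuming the Route53 permissions its documentation lists (edit with kops edit cluster, then apply with kops update cluster --yes):
```yaml
# Excerpt of the kops cluster spec (kops edit cluster <name>).
# The inline JSON is merged into the nodes' instance role by kops.
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["route53:ChangeResourceRecordSets"],
          "Resource": ["arn:aws:route53:::hostedzone/*"]
        },
        {
          "Effect": "Allow",
          "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
          "Resource": ["*"]
        }
      ]
```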

KOPS over AWS EKS or vice versa

I came across an open source Kubernetes project, kops, and the AWS Kubernetes service, EKS. Both products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over kops, or vice versa, if one has not run either of them before.
This question does not ask which one is better, but rather asks for a comparison.
The two are largely the same. At the time of writing, the following are the differences I'm aware of between the two offerings.
EKS:
Fully managed control plane from AWS - you have no control over the masters
AWS-native IAM authentication with the cluster
VPC level networking for pods meaning you can use things like security groups at the cluster/pod level
kops:
Support for more Kubernetes features, such as API server options
Auto-provisioned nodes using the built-in kops nodeup tool
More flexibility over Kubernetes versions; EKS only has a few versions available right now
Another significant difference is that EKS is an AWS product, so you need an AWS account, whereas kops lets you run Kubernetes not only on AWS but also on GCE and DigitalOcean.
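To make the comparison concrete, this is a hedged sketch of bringing up a cluster with kops (the state-store bucket, cluster name, and sizes are placeholders; the .k8s.local suffix selects gossip DNS so no Route53 zone is needed). Unlike EKS, the masters it creates are EC2 instances you own and can reconfigure:
```bash
# kops keeps cluster state in an S3 bucket you provide.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Create masters and workers as plain EC2 instances.
kops create cluster \
  --name my-cluster.k8s.local \
  --zones us-west-2a \
  --node-count 3 \
  --yes
```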