How do I create an EKS cluster with nodes via CDK? - amazon-web-services

I'm able to deploy a Kubernetes Fargate cluster via CDK on my desired VPC:
const vpc = ec2.Vpc.fromLookup(this, 'vpc', {
  vpcId: 'vpc-abcdefg'
})

const cluster = new eks.FargateCluster(this, 'sample-eks', {
  version: eks.KubernetesVersion.V1_21,
  vpc,
})

cluster.addNodegroupCapacity('node-group-capacity', {
  minSize: 2,
  maxSize: 2,
})
However, there are no nodes within this cluster:
$ kubectl config get-clusters
NAME
minikube
arn:aws:eks:us-east-1:<account_number>:cluster/<cluster_name>
$ kubectl get nodes
No resources found
I'm very confused as to why this is happening, since I thought the addNodegroupCapacity method was supposed to add nodes to the cluster. I think I can add nodes post-hoc via eksctl, but I was wondering whether it's possible to deploy with nodes directly via CDK.

My mistake was not adding a role/user with sufficient permissions to the aws-auth ConfigMap. This meant that the cluster did not have proper permissions to create nodes. The following fixed my issue:
const role = iam.Role.fromRoleName(this, 'admin-role', '<my-admin-role>');
cluster.awsAuth.addRoleMapping(role, { groups: [ 'system:masters' ]});
The <my-admin-role> argument is the name of the role that I assume when I log in to AWS. I found it by running aws sts get-caller-identity, which returns a JSON doc that provides your assumed role's ARN. For me it was arn:aws:sts::<account-number>:assumed-role/<my-admin-role>/<my-username>.
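For reference, the output looks roughly like this (all values below are placeholders):
$ aws sts get-caller-identity
{
    "UserId": "<role-id>:<my-username>",
    "Account": "<account-number>",
    "Arn": "arn:aws:sts::<account-number>:assumed-role/<my-admin-role>/<my-username>"
}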
This also resolved another issue, as I was not able to interact with the cluster via kubectl. I would get the following error message: error: You must be logged in to the server (Unauthorized). Adding my assumed role to the aws-auth ConfigMap gave me permission to access the cluster via my terminal.
I'm not sure why I haven't seen this bit of configuration in the tutorials I've used; I'd appreciate any comments that could help explain it to me.

Related

AWS IAM Role - AccessDenied error in one pod

I have a service account which I am trying to use across multiple pods installed in the same namespace.
One of the pods is created by Airflow KubernetesPodOperator.
The other is created via Helm through Kubernetes deployment.
In the Airflow deployment, I see the IAM role being assigned and DynamoDB tables being created, listed, etc. However, in the second Helm chart deployment (or in a test pod, created as shown here), I keep getting an AccessDenied error for CreateTable in DynamoDB.
I can see the AWS role ARN being assigned to the service account, the service account being applied to the pod, and the corresponding token file being created, but I still see the AccessDenied exception:
arn:aws:sts::1234567890:assumed-role/MyCustomRole/aws-sdk-java-1636152310195 is not authorized to perform: dynamodb:CreateTable on resource
ServiceAccount
Name:                mypipeline-service-account
Namespace:           abc-qa-daemons
Labels:              app.kubernetes.io/managed-by=Helm
                     chart=abc-pipeline-main.651
                     heritage=Helm
                     release=ab-qa-pipeline
                     tier=mypipeline
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/MyCustomRole
                     meta.helm.sh/release-name: ab-qa-pipeline
                     meta.helm.sh/release-namespace: abc-qa-daemons
Image pull secrets:  <none>
Mountable secrets:   mypipeline-service-account-token-6gm5b
Tokens:              mypipeline-service-account-token-6gm5b
P.S.: The client code is the same for both the pod created by KubernetesPodOperator and the one deployed through the Helm chart, i.e. the same Docker image. Other attributes like nodeSelector, tolerations, and volume mounts are also the same.
The describe pod output for both of them is similar, with just some name and label changes.
The KubernetesPodOperator pod has QoS class Burstable while the Helm chart one is BestEffort.
Why do I get AccessDenied in Helm deployment but not in KubernetesPodOperator? How to debug this issue?
Whenever we get an AccessDenied exception, there can be two possible reasons:
- You have assigned the wrong role
- The assigned role doesn't have the necessary permissions
In my case, the latter was the issue. The permissions assigned to a particular role can be quite granular; for example, in my case the role could only create/describe DynamoDB tables whose names start with a specific prefix, not all DynamoDB tables.
So it is always advisable to check the IAM role's permissions whenever you get this error.
As stated in the question, be sure to check the service account using the awscli image.
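One way to do that check, sketched here with the names from this question (the pod name awscli-test is arbitrary), is to start a throwaway pod that runs under the same service account with the amazon/aws-cli image and ask STS which identity it resolves to:
# Run a one-off pod under the suspect service account and print the identity it assumes.
# If IRSA is wired up correctly, the role ARN from the annotation should appear in the output.
kubectl run awscli-test --rm -it \
  --image=amazon/aws-cli \
  --namespace=abc-qa-daemons \
  --overrides='{"spec":{"serviceAccountName":"mypipeline-service-account"}}' \
  -- sts get-caller-identity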
Keep in mind that there is a credential provider chain used by the AWS SDKs which determines the credentials used by the application. In most cases the DefaultAWSCredentialsProviderChain is used, and its order is given below. Ensure that the SDK is picking up the intended provider (in our case it is WebIdentityTokenCredentialsProvider):
super(new EnvironmentVariableCredentialsProvider(),   // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
      new SystemPropertiesCredentialsProvider(),      // aws.accessKeyId / aws.secretKey JVM system properties
      new ProfileCredentialsProvider(),               // profiles in ~/.aws/credentials
      WebIdentityTokenCredentialsProvider.create(),   // web identity token file (what IRSA injects)
      new EC2ContainerCredentialsProviderWrapper());  // ECS container / EC2 instance metadata credentials
Additionally, you might also want to set the AWS SDK classes to DEBUG mode in your logger to see which credentials provider is being picked up and why.
To check whether the service account is applied to a pod, describe the pod and check whether the AWS environment variables are set on it, such as AWS_REGION, AWS_DEFAULT_REGION, AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.
If they are not, check whether your service account has the AWS annotation eks.amazonaws.com/role-arn by describing that service account.
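Both checks can be done with kubectl; a quick sketch using the names from this question (the pod name is a placeholder):
# Look for AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE in the container environment.
kubectl describe pod <pod-name> -n abc-qa-daemons | grep AWS_

# Confirm the service account carries the eks.amazonaws.com/role-arn annotation.
kubectl describe serviceaccount mypipeline-service-account -n abc-qa-daemons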

Error loading Namespaces. Unauthorized: Verify you have access to the Kubernetes cluster

I have created an EKS cluster using the eksctl command line and verified that the application is working fine.
But I'm noticing a strange issue: when I try to access the nodes in the cluster in the web browser, I see the following error:
Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster
I am able to see the nodes using kubectl get nodes.
I am logged in as the admin user. Any help on how to work around this would be really great. Thanks.
You will need to add your IAM role/user to your cluster's aws-auth ConfigMap.
Basic steps to follow, taken from https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html:
kubectl edit -n kube-system configmap/aws-auth

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: <arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup-standard-wo-NodeInstanceRole-1WP3NUE3O6UCF>
      username: <system:node:{{EC2PrivateDNSName}}>
      groups:
        - <system:bootstrappers>
        - <system:nodes>
  mapUsers: |
    - userarn: <arn:aws:iam::111122223333:user/admin>
      username: <admin>
      groups:
        - <system:masters>
    - userarn: <arn:aws:iam::111122223333:user/ops-user>
      username: <ops-user>
      groups:
        - <system:masters>
I was also seeing this error; it was introduced by the latest addition to EKS, see https://aws.amazon.com/blogs/containers/introducing-the-new-amazon-eks-console/
Since then, the console makes requests to EKS on behalf of the user or role you are logged in as.
So make sure the kube-system:aws-auth ConfigMap has that user or role added.
This user/role might not be the same one you are using locally with the AWS CLI, hence kubectl might work while you still see that error!
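If the cluster was created with eksctl, one way to add that mapping without hand-editing the ConfigMap is eksctl's identity-mapping command; a sketch with placeholder values:
eksctl create iamidentitymapping \
  --cluster <cluster_name> \
  --region <region> \
  --arn arn:aws:iam::111122223333:role/<console-role> \
  --username admin \
  --group system:masters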
Amazon recently (December 2020) added a new feature that allows you to browse workloads inside the cluster from the AWS console.
If you are missing permissions you will get that error.
The permissions that are needed are described here:
https://docs.aws.amazon.com/eks/latest/userguide/security_iam_id-based-policy-examples.html#policy_example3
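As a rough illustration only (the authoritative action list is on the linked page, so treat this as an assumption rather than a copy), the policy boils down to letting the console describe/list EKS resources and reach the Kubernetes API:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:AccessKubernetesApi"
      ],
      "Resource": "*"
    }
  ]
}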
This might as well be because you created the AWS EKS cluster using a different IAM user than the one currently logged in to the AWS Management Console, so the currently logged-in IAM user does not have permission to view the namespaces on the EKS cluster.
Try logging in to the AWS Management Console using the credentials of the IAM user who created the EKS cluster, and the issue should be fixed.

How to assign AWS IAM Role to Service Account with Terraform?

I have a Kubernetes EKS cluster on AWS, and my goal is to be able to watch particular ConfigMaps in my Spring Boot application.
In my local environment everything works correctly, but when I use this setup inside AWS I get a Forbidden status and my application fails to run.
I've created a Service Account but don't understand how to create Terraform script which can assign the needed IAM Role.
Any help would be appreciated.
This depends on several things.
An AWS IAM Role can be provided to Pods in different ways, but the recommended way now is to use IAM Roles for Service Accounts, IRSA.
Depending on how you provision the Kubernetes cluster with Terraform, this is also done in different ways. If you use AWS EKS and provision the cluster using the Terraform AWS EKS module, then you should set enable_irsa to true.
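A minimal sketch of that, assuming the terraform-aws-modules/eks/aws module (exact argument names can vary between module versions, and the VPC/node group configuration is omitted):
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "my-cluster"   # placeholder
  cluster_version = "1.21"

  # Creates the cluster's OIDC provider so IAM Roles for Service Accounts (IRSA) can be used.
  enable_irsa = true

  # ... VPC, subnet and node group configuration omitted ...
}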
You then need to create an IAM Role for your application (Pods), and you need to return the ARN for that IAM Role. This can be done using the aws_iam_role resource.
You need to create a Kubernetes ServiceAccount for your Pod; it can be created with Terraform, but many prefer to use YAML for Kubernetes resources. The ServiceAccount needs to be annotated with the IAM Role ARN, like:
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::14xxx84:role/my-iam-role
See the IAM Roles for Service Accounts lesson in the EKS workshop for a guide through this; however, it does not use Terraform.
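For reference, a complete ServiceAccount manifest carrying that annotation might look like this (the name, namespace and role ARN are placeholders):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::14xxx84:role/my-iam-role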
First I created the necessary role using the code below:
data "aws_iam_policy_document" "eks_pods" {
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:aws-node"]
}
principals {
identifiers = [aws_iam_openid_connect_provider.eks.arn]
type = "Federated"
}
}
}
# create a role that can be attached to pods.
resource "aws_iam_role" "eks_pods" {
assume_role_policy = data.aws_iam_policy_document.eks_pods.json
name = "eks-pods-iam-role01"
depends_on = [aws_iam_openid_connect_provider.eks]
}
resource "aws_iam_role_policy_attachment" "aws_pods" {
role = aws_iam_role.eks_pods.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
depends_on = [aws_iam_role.eks_pods]
}
Then I used the command below to attach the created role to the service account; I have not found any way to do it from within Terraform:
kubectl annotate serviceaccount -n kube-system aws-node eks.amazonaws.com/role-arn=arn:aws:iam::<your_account>:role/eks-pods-iam-role01
Then you can verify your service account; it should show the new annotation:
kubectl describe sa aws-node -n kube-system

Name:                aws-node
Namespace:           kube-system
Labels:              <none>
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<ur_account>:role/eks-pods-iam-role01
Image pull secrets:  <none>
Mountable secrets:   aws-node-token-xxxxx
Tokens:              aws-node-token-xxxxx
Events:              <none>
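As an alternative to the kubectl annotate step, the annotation can also be managed from Terraform with the Kubernetes provider; a minimal sketch, assuming the provider is already configured against the cluster and that you create a dedicated service account rather than annotating the existing aws-node one (names are illustrative):
resource "kubernetes_service_account" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "kube-system"
    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.eks_pods.arn
    }
  }
}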
I think you missed something here: you should add a trust relationship between the role and the OIDC provider, as described here:

AWS EKS kubectl - No resources found in default namespace

Trying to set up an EKS cluster.
An error occurred (AccessDeniedException) when calling the DescribeCluster operation: Account xxx is not authorized to use this service. This error came from the CLI; on the console I was able to create the cluster and everything else successfully.
I am logged in as the root user (it's just my personal account).
It says Account, so it sounds like it's not a user/permissions issue?
Do I have to enable my account for this service? I don't see any such option.
Also, if I log in as a user (rather than root), will I be able to see everything that was earlier created as root? I have now created a user and assigned admin and eks* permissions; I checked this when I sign in as the user, and I can see everything.
The AWS CLI was set up with root credentials (I think), so do I have to go back, undo all this, and just use this user?
Update 1
I redid/restarted everything, including the user and aws configure, just to make sure, but the issue still did not get resolved.
There is an option to create the kubeconfig file manually, and that finally worked.
And I was able to run: kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   48m
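As an aside, the kubeconfig can also be generated by the AWS CLI rather than written by hand, assuming a reasonably recent CLI version (cluster name and region are placeholders):
aws eks update-kubeconfig --name <cluster_name> --region <region>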
KUBECONFIG: I had set up the KUBECONFIG environment variable:
$env:KUBECONFIG="C:\Users\sbaha\.kube\config-EKS-nginixClstr"
$Env:KUBECONFIG
C:\Users\sbaha\.kube\config-EKS-nginixClstr
kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
*         aws    kubernetes   aws
kubectl config current-context
aws
My understanding is that I should see both the aws and my EKS-nginixClstr contexts, but I only see aws; is this (also) an issue?
The next step is to create and add worker nodes. I updated the node ARN correctly in the .yaml file and ran: kubectl apply -f ~\.kube\aws-auth-cm.yaml
configmap/aws-auth configured
So this perhaps worked. But the next step fails:
kubectl get nodes returns No resources found in default namespace.
On the AWS console the node group shows Create Completed. Also, on the CLI, kubectl get nodes --watch does not even return.
So this has to be debugged next (it never ends).
aws-auth-cm.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
This problem was related to not having the correct version of eksctl: it must be at least 0.7.0. The documentation states this and I knew it, but initially, whatever I did, I could not get beyond 0.6.0. The way to get it is to configure your AWS CLI to a region that supports EKS. Once you get 0.7.0, this issue gets resolved.
Overall, to make EKS work you must use the same user on both the console and the CLI, work in a region that supports EKS, and have the correct eksctl version (at least 0.7.0).
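A quick way to verify both points from the command line (the profile and region are whatever your setup uses):
# Check the eksctl version in use.
eksctl version

# Check which region the AWS CLI is currently configured for.
aws configure get region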

"kubectl" not connecting to aws EKS cluster from my local windows workstation

I am trying to set up an AWS EKS cluster and want to connect to that cluster from my local Windows workstation, but I am not able to connect to it. Here are the steps I did:
Create an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select the AWS service role "EKS" -> give it the role name "eks-role-1").
Create another IAM user named "eks" for programmatic access; this will help me connect to my EKS cluster from my local Windows workstation. The policies I attached to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy", and "AmazonEKS_CNI_Policy".
Next, the EKS cluster was created with the role ARN created in step 1. Finally, the EKS cluster was created in the AWS console.
On my local Windows workstation, I downloaded "kubectl.exe" and "aws-iam-authenticator.exe" and ran aws configure using the access key and secret from step 2 for the user "eks". After configuring "~/.kube/config", I ran the command below and got this error:
Command: kubectl.exe get svc
Output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Unable to connect to the server: getting credentials: exec: exit status 1
Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and token for the AWS service role (step 1: eks-role-1)?
For people who run into this: maybe you provisioned EKS with a named AWS profile. EKS does not add the profile inside the kubeconfig.
Solution:
1. Export AWS credentials:
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
2. If you've already configured AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
3. Similar to 2, but you only need to do it once: edit your kubeconfig:
users:
- name: eks # This depends on your config.
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "general"
      env:
        - name: AWS_PROFILE
          value: "<YOUR_PROFILE_HERE>"
I think I got the answer to this issue; I want to write it down here so people can benefit from it.
When you create an EKS cluster for the first time, check which user (or role) you are creating it as (check your AWS web console user settings); even if you are creating it from a CloudFormation script, you may be creating it under a different role. You need CLI access for that same user to start accessing your cluster with the kubectl tool. Once you have that first access (that user has admin access by default), you may need to add other IAM users to cluster admin (or another role) using the aws-auth ConfigMap; only then can you switch to or use an alternative IAM user to access the cluster from the kubectl command line.
Make sure the file ~/.aws/credentials has an AWS access key and secret key for an IAM account that can manage the cluster.
Alternatively you can set the AWS env parameters:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
Adding another option: instead of working with aws-iam-authenticator, you can change the command to aws and replace the args as below:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:                 #<--- Change the args
        - --region
        - <YOUR_REGION>
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      command: aws          #<--- Change the command to aws
      env:
        - name: AWS_PROFILE
          value: <YOUR_PROFILE_HERE>
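If that exec configuration still misbehaves, a quick sanity check is to run the same call yourself, with the profile, region and cluster name from your setup:
aws eks get-token --cluster-name my-cluster --region <YOUR_REGION> --profile <YOUR_PROFILE_HERE>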