Where can I view the service account created by `eksctl`? - amazon-web-services

I created an EKS cluster in AWS and used this command to create a service account:
eksctl create iamserviceaccount --name alb-ingress-controller --cluster $componentName --attach-policy-arn $serviceRoleArn --approve --override-existing-serviceaccounts
The output of the command is:
[ℹ] using region ap-southeast-2
[ℹ] 1 existing iamserviceaccount(s) (default/alb-ingress-controller) will be excluded
[ℹ] 1 iamserviceaccount (default/alb-ingress-controller) was excluded (based on the include/exclude rules)
[!] metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
[ℹ] no tasks
I am not sure whether it was created successfully or not.
I use this command eksctl get iamserviceaccount to verify the result but get an error response:
Error: getting iamserviceaccounts: no output "Role1" in stack "eksctl-monitor-addon-iamserviceaccount-default-alb-ingress-controller"
I also tried to run kubectl get serviceaccount but I got the error: Error from server (NotFound): serviceaccounts "alb-ingress-controller" not found.
Does this mean the service account failed to be created? Where can I view the service account in the AWS console? Or where can I view the error?

As per that message, the service account already exists.
To get the service account, use kubectl:
kubectl get serviceaccount <SERVICE_ACCOUNT_NAME> -n kube-system -o yaml
The order is: first create the IAM role, and after that the RBAC Role and binding.
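As a minimal sketch of that ordering (the rules below are illustrative only; the real ALB ingress controller needs a much larger rule set, see its install docs):
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: alb-ingress-controller
rules:
# Illustrative subset of permissions, not the controller's full list
- apiGroups: ["", "extensions"]
  resources: ["services", "endpoints", "ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
- kind: ServiceAccount
  name: alb-ingress-controller
  namespace: kube-system
EOF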
Below is the command in case you want to override the existing service account:
eksctl --profile <PROFILE_NAME> \
--region=ap-northeast-2 \
create iamserviceaccount \
--name alb-ingress-controller \
--namespace kube-system \
--override-existing-serviceaccounts \
--approve --cluster <CLUSTER_NAME> \
--attach-policy-arn \
arn:aws:iam::ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy
I found this workshop, Amazon EKS Workshop, very helpful during my venture into EKS.
More information pertaining to ALB can be found here
EDIT
From this message:
[ℹ] 1 existing iamserviceaccount(s) (default/alb-ingress-controller) will be excluded
it seems like the service account was created inside the default namespace,
so the command to check the service account will be:
kubectl get serviceaccount <SERVICE_ACCOUNT_NAME> -n default -o yaml
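If you are not sure which namespace it actually landed in, a quick way to search every namespace is:
kubectl get serviceaccounts -A | grep alb-ingress-controller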

eksctl uses CloudFormation to create the resources, so you will probably find the cause of the error there.
Go to the CloudFormation console in AWS.
Find the stack named eksctl-[CLUSTER NAME]-addon-iamserviceaccount-default-[SERVICE ACCOUNT NAME]; it should have the ROLLBACK_COMPLETE status.
Select the "Events" tab and scroll to the first error.
In my case, the cause was a missing policy that I was attaching to the role.
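If you prefer the CLI over the console, a rough equivalent is to pull the failed events from the stack (the stack name below is just the one from the question; adjust it to yours):
aws cloudformation describe-stack-events \
  --stack-name eksctl-monitor-addon-iamserviceaccount-default-alb-ingress-controller \
  --query "StackEvents[?contains(ResourceStatus, 'FAILED')].[LogicalResourceId, ResourceStatusReason]" \
  --output table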

Works as expected, thanks @samtoddler! 😎
1 Create the IAM policy for the IAM Role 👏
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.0/docs/install/iam_policy.json
aws-vault exec Spryker-Humanetic-POC -- aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
2 Create the IAM role and attach it to the newly created ServiceAccount 👏
eksctl create iamserviceaccount \
--cluster education-eks-7yby62S7 \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
3.1 Verify #1 that the ServiceAccount lives in --namespace kube-system 👏
kubectl get sa aws-load-balancer-controller -n kube-system -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/eksctl-education-eks-7yby62S7-addon-iamservi-Role1-126OXTBKF3WBM
  creationTimestamp: "2021-12-12T17:38:48Z"
  labels:
    app.kubernetes.io/managed-by: eksctl
  name: aws-load-balancer-controller
  namespace: kube-system
  resourceVersion: "686442"
  uid: 895f6f34-ab04-4bca-aeac-1b6b75766546
secrets:
- name: aws-load-balancer-controller-token-gcd5c
3.2 Verify #2 👏
kubectl get sa aws-load-balancer-controller -n kube-system
NAME                           SECRETS   AGE
aws-load-balancer-controller   1         123m
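Optionally, as an extra check, you can confirm the policy is attached to the role from the eks.amazonaws.com/role-arn annotation above (the role name below is the one from that output; yours will differ):
aws iam list-attached-role-policies \
  --role-name eksctl-education-eks-7yby62S7-addon-iamservi-Role1-126OXTBKF3WBM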
Hope it will help! 🧗🏼‍♀️

Related

Why did eksctl create iamserviceaccount failed? waiter state transitioned to Failure

I am running the command:
eksctl create iamserviceaccount --name efs-csi-controller-sa --namespace kube-system --cluster mmpana --attach-policy-arn arn:aws:iam::12345678:policy/EKS_EFS_CSI_Driver_Policy --approve --override-existing-serviceaccounts --region us-east-1
I got the error:
2023-02-07 13:36:36 [ℹ] 1 error(s) occurred and IAM Role stacks haven't been created properly, you may wish to check CloudFormation console
2023-02-07 13:36:36 [✖] waiter state transitioned to Failure
Then I checked the CloudFormation stacks (screenshots omitted).
I upgraded eksctl yesterday
eksctl version
0.128.0
I am now looking at my policy.
How to fix this?

How to check why my service account is not created? 1 existing iamserviceaccount

My command to create it:
eksctl create iamserviceaccount --name efs-csi-controller-sa --namespace kube-system --cluster ciga --attach-policy-arn arn:aws:iam::$xxxxxx:policy/EKS_EFS_CSI_Driver_Policy --approve --override-existing-serviceaccounts --region us-east-1
I got
2023-02-04 18:12:40 [ℹ] 1 existing iamserviceaccount(s) (kube-system/efs-csi-controller-sa) will be excluded
2023-02-04 18:12:40 [ℹ] 1 iamserviceaccount (kube-system/efs-csi-controller-sa) was excluded (based on the include/exclude rules)
2023-02-04 18:12:40 [!] metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2023-02-04 18:12:40 [ℹ] no tasks
I cannot find it in kube-system:
kubectl get serviceaccount efs-csi-controller-sa -n kube-system -o yaml
Error from server (NotFound): serviceaccounts "efs-csi-controller-sa" not found
What is wrong with my eksctl create command?

eksctl create iamserviceaccount with EKS add-on support for ADOT Operator

I am attempting to install the AWS Distro for OpenTelemetry (ADOT) into my EKS cluster.
https://docs.aws.amazon.com/eks/latest/userguide/adot-reqts.html
I am following this guide to create the service account for the IAM role (irsa technique in AWS):
https://docs.aws.amazon.com/eks/latest/userguide/adot-iam.html
When I run the eksctl command:
eksctl create iamserviceaccount \
--name adot-collector \
--namespace monitoring \
--cluster <MY CLUSTER> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
--attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
--attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
--approve \
--override-existing-serviceaccounts
I am getting this output:
2 existing iamserviceaccount(s) (hello-world/default,monitoring/adot-collector) will be excluded
iamserviceaccount (monitoring/adot-collector) was excluded (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
no tasks
This Kubernetes service account does not exist in the target namespace or in any other:
k get sa adot-collector -n monitoring
k get serviceAccounts -A | grep adot
Expected output:
1 iamserviceaccount (monitoring/adot-collector) was included (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
...
created serviceaccount "monitoring/adot-collector"
When I check in the AWS Console under CloudFormation, I see that the stack was complete, with a message of "IAM role for serviceaccount "monitoring/adot-collector" [created and managed by eksctl]"
What can I do to troubleshoot this? Why is the Kubernetes service account not getting built?
This was resolved after discovering there was a ValidatingWebhookConfiguration that was blocking the creation of service accounts without a specific label. Temporarily disabling the webhook allowed the stack to run to completion, and the SA was created.
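For anyone hitting the same thing, one way to spot such a webhook (the webhook name below is a placeholder):
# List admission webhooks that could be intercepting object creation
kubectl get validatingwebhookconfigurations
# Inspect the rules of a suspicious one
kubectl get validatingwebhookconfiguration <WEBHOOK_NAME> -o yaml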

my first EKS - error: You must be logged in to the server (Unauthorized)

I've seen this error in many previous questions, but as a complete beginner I'm not sure if those questions were relevant to my situation.
I've created a cluster called wiz_try using the AWS EKS GUI.
When it asked for a role, I created a new one with the EKS policy AmazonEKSClusterPolicy.
I downloaded the AWS CLI and configured it. I also downloaded kubectl.
Checked that I can see the cluster:
aws eks --region eu-central-1 describe-cluster --name wiz_try --query cluster.status
ACTIVE
Ran:
aws eks update-kubeconfig --name wiz_try --region eu-central-1
This has created a config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *******
    server: https://***.gr7.eu-central-1.eks.amazonaws.com
  name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
contexts:
- context:
    cluster: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
    user: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
  name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
current-context: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:399222411859:cluster/wiz_try
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - wiz_try
      command: aws
kubectl get svc
error: You must be logged in to the server (Unauthorized)
I tried solving it by running:
aws eks update-kubeconfig --name wiz_try --region eu-central-1 --role-arn arn:aws:iam::***:role/EKS_cluster_role
as this is the role I created, but it is not helping.
What am I missing?
Thanks

kubectl error You must be logged in to the server (Unauthorized) - EKS cluster

I am new to EKS and Kubernetes.
Here is what happened:
An EKS cluster was created with a specific IAM role.
When trying to connect to the cluster with kubectl commands, it was throwing:
error You must be logged in to the server (Unauthorized)
I followed the steps detailed here
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
Assumed the role that created the EKS cluster
Exported the credentials to a new profile dev in the AWS credentials file
Ran AWS_PROFILE=dev kubectl get nodes. It was able to list all my nodes.
Note: I had already run aws eks --region <region> update-kubeconfig --name <cluster-name>
Now I tried to add the role/SAML user that is trying to access the EKS cluster by applying the ConfigMap below, and ran AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
aws-auth.yaml being:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/abc#def.com
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Notice the role ARN is a SAML user assumed into the aws_dev role that tries to connect to the cluster.
Once this was applied, the response was configmap/aws-auth configured
I now tried to execute kubectl get nodes without the AWS_PROFILE=dev and it fails again with error You must be logged in to the server (Unauthorized).
I also executed AWS_PROFILE=dev kubectl get nodes which previously worked but fails now.
I am guessing the aws-auth information got messed up. Is there a way to revert the kubectl apply that was done above?
Any kubectl command fails now. What might be happening? How can I rectify this?
You get an authorization error when your AWS Identity and Access Management (IAM) entity isn't authorized by the role-based access control (RBAC) configuration of the Amazon EKS cluster. This happens when the Amazon EKS cluster is created by an IAM user or role that's different from the one used by aws-iam-authenticator.
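A quick way to see which identity your kubectl calls are actually using, and which identities the cluster currently maps, is:
# Identity used by the AWS CLI / kubectl exec credential plugin
aws sts get-caller-identity
# Identities mapped in the cluster (needs access as the cluster creator)
kubectl describe configmap aws-auth -n kube-system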
Check the resolution here.
kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster
Recreate the cluster, and when you get to step 6 in the link, add a second role (or user) to your aws-auth.yaml, like this:
Get ConfigMap with kubectl get cm -n kube-system aws-auth -o yaml
Add your role as a second item to the ConfigMap (don't change the first one):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/abc#def.com
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    ### Add only this (assuming you're using a role)
    - rolearn: <ARN of your IAM role>
      username: <any name>
      groups:
        - system:masters
Run AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
Then get the kubeconfig with your temporary IAM role credentials with aws eks --region <region> update-kubeconfig --name <cluster-name>
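Then a quick sanity check with the new kubeconfig (any read-only call works):
# Verify the new identity can reach the API and has the expected rights
kubectl get nodes
kubectl auth can-i '*' '*'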
You probably changed the aws-auth config.
Generally, when you create a cluster, the user (or role) who created that cluster has admin rights; when you switch users, you need to add them to the config (done as the user who created the cluster).
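If you still have access as the creating identity, an alternative to editing the YAML by hand is eksctl's identity mapping command (cluster name, region, and ARN below are placeholders):
eksctl create iamidentitymapping \
  --cluster <CLUSTER_NAME> \
  --region <REGION> \
  --arn arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_NAME> \
  --group system:masters \
  --username admin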