Container Insights on Amazon EKS AccessDeniedException

I'm trying to add Container Insights to my EKS cluster but am running into an issue when deploying. According to my logs, I'm getting the following:
[error] [output:cloudwatch_logs:cloudwatch_logs.2] CreateLogGroup API responded with error='AccessDeniedException'
[error] [output:cloudwatch_logs:cloudwatch_logs.2] Failed to create log group
The strange part is that the role it seems to be assuming is the role attached to my EC2 worker nodes rather than the role for the service account I have created. I'm creating the service account with the following command and can see it in AWS successfully:
eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name cloudwatch-agent --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
Despite the service account being created successfully, I continue to get the AccessDeniedException.
One thing I found is that the logs work fine when I manually attach the CloudWatchAgentServerPolicy to my worker nodes; however, this is not the implementation I would like. I would rather add the service account in an automated way and avoid touching the worker nodes directly if possible. The steps I followed can be found at the bottom of this documentation.
Thanks so much!

For anyone running into this issue: within the quickstart yaml, there is a fluent-bit service account that must be removed from that file and created manually. I created it using the following command:
eksctl create iamserviceaccount --region ${env:AWS_DEFAULT_REGION} --name fluent-bit --namespace amazon-cloudwatch --cluster ${env:CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy --override-existing-serviceaccounts --approve
After running this command and removing the fluent-bit service account from the yaml, delete and reapply all your amazon-cloudwatch namespace items and it should work.
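Before reapplying, it is also worth confirming that both service accounts carry the eks.amazonaws.com/role-arn annotation that eksctl adds for IRSA; if the annotation is missing, the pods fall back to the node role, which is exactly the symptom described above. A rough sequence, assuming the default names from the quickstart and that the manifest was saved locally as quickstart.yaml (hypothetical file name):
kubectl describe serviceaccount cloudwatch-agent -n amazon-cloudwatch
kubectl describe serviceaccount fluent-bit -n amazon-cloudwatch
kubectl delete -f quickstart.yaml
kubectl apply -f quickstart.yaml
kubectl get pods -n amazon-cloudwatch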

Related

How to switch credentials kubectl?

I use clusters in different AWS accounts, and every time I want to connect via kubectl I have to run aws configure.
How can I get the other credentials passed to kubectl?
I think that if I pass --kubeconfig I can pass the credentials, but I can't figure out how to do it.
You can manage the different clusters with a single kubeconfig as well.
All you need to do is change the context for each account's cluster within that single file:
kubectl config get-contexts
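Then switch by name; the context placeholder below is whatever get-contexts reports for the other account's cluster:
kubectl config use-context <context-name>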
But if you are still looking for other approaches, you can create a separate kubeconfig for each account.
export KUBECONFIG=~/.kube/aws-account-1
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-1
For the second account:
export KUBECONFIG=~/.kube/aws-account-2
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-2
Now if you want to manage the cluster in account 1, set KUBECONFIG to the account 1 file:
export KUBECONFIG=~/.kube/aws-account-1
kubectl get pods
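You can also point KUBECONFIG at both files at once; kubectl merges them, so you can switch between the two accounts with use-context instead of re-exporting the variable (file names as created above):
export KUBECONFIG=~/.kube/aws-account-1:~/.kube/aws-account-2
kubectl config get-contexts
kubectl config use-context <context-of-the-other-cluster>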

eksctl create iamserviceaccount with EKS add-on support for ADOT Operator

I am attempting to install the AWS Distro for OpenTelemetry (ADOT) into my EKS cluster.
https://docs.aws.amazon.com/eks/latest/userguide/adot-reqts.html
I am following this guide to create the service account for the IAM role (the IRSA technique in AWS):
https://docs.aws.amazon.com/eks/latest/userguide/adot-iam.html
When I run the eksctl commands:
eksctl create iamserviceaccount \
--name adot-collector \
--namespace monitoring \
--cluster <MY CLUSTER> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
--attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
--attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
--approve \
--override-existing-serviceaccounts
I am getting this output:
2 existing iamserviceaccount(s) (hello-world/default,monitoring/adot-collector) will be excluded
iamserviceaccount (monitoring/adot-collector) was excluded (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
no tasks
This Kubernetes service account does not exist in the target namespace or in any other:
k get sa adot-collector -n monitoring
k get serviceAccounts -A | grep adot
Expected output:
1 iamserviceaccount (monitoring/adot-collector) was included (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
...
created serviceaccount "monitoring/adot-collector"
When I check in the AWS Console under CloudFormation, I see that the stack completed, with a message of "IAM role for serviceaccount "monitoring/adot-collector" [created and managed by eksctl]"
What can I do to troubleshoot this? Why is the Kubernetes service account not getting built?
This was resolved after discovering there was a ValidatingWebhookConfiguration that was blocking the creation of service accounts without a specific label. Temporarily disabling the webhook allowed the stack to run to completion, and the service account was created.
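If you run into the same thing, a rough sequence for finding and temporarily removing the offending webhook looks like this (the webhook name is a placeholder; keep a backup so you can restore it after re-running eksctl create iamserviceaccount):
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration <webhook-name> -o yaml > webhook-backup.yaml
kubectl delete validatingwebhookconfiguration <webhook-name>
kubectl apply -f webhook-backup.yaml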

How can I get authed to run kubectl commands in EKS when I'm not the creator of the cluster && the creator/admin isn't available?

I have an EKS cluster that wasn't created by me. I want to operate the cluster by running kubectl commands, but I keep getting "error: You must be logged in to the server (Unauthorized)".
e.g.,
$ kubectl get pod
error: You must be logged in to the server (Unauthorized)
My IAM user has AdminFullAccess, so I believe I'm being blocked by Kubernetes permissions. According to the AWS docs, someone who didn't create the cluster needs to ask the owner or an admin to modify the aws-auth ConfigMap, but they have already left the company. Is there any way to solve this problem?
Perhaps the creator's account still exists in IAM. If so, you can reset their API access keys and regenerate your kubeconfig using their IAM account. Assuming you've already configured the awscli for their account:
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
Once you have done that, you will get full access to the cluster, just as the person who left had. I recommend using this opportunity to patch aws-auth to enable access from your own account.
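While you still have the creator's credentials configured, a minimal sketch of granting your own IAM user admin access with eksctl (which writes the corresponding entry into aws-auth; the cluster name, region and ARN are placeholders):
eksctl create iamidentitymapping \
--cluster <cluster_name> \
--region <region-code> \
--arn arn:aws:iam::111122223333:user/<your-user> \
--group system:masters \
--username <your-user>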

Unable to create a fargate profile for the AWS EKS cluster

I have an AWS EKS cluster named xyz-cicd in the Ohio region (us-east-2), which I created using the eksctl command below:
eksctl create cluster --name xyz-cicd --region us-east-2 --fargate
It took some time to create the cluster with a default Fargate profile. Now I want to create a new profile for the same cluster, so I ran the following command, which is giving me an error:
vinod827#Vinods-MacBook-Pro cicd % eksctl create fargateprofile \
--cluster xyz-cicd \
--name cicd \
--namespace cicd
Error: fetching cluster status to determine operability: unable to describe cluster control plane: ResourceNotFoundException: No cluster found for name: xyz-cicd.
{
RespMetadata: {
StatusCode: 404,
RequestID: "c12bd05c-3eb6-40bf-a972-f1cba139ea9a"
},
Message_: "No cluster found for name: xyz-cicd."
}
vinod827#Vinods-MacBook-Pro cicd %
Please note there is no issue with the cluster name or region. The cluster does exist in this region, and I'm not sure why the eksctl command is returning an error stating that no cluster was found with that name. If that were the case, I wouldn't be able to schedule a pod on the default profile. Please advise, thanks.
Your second command is missing the region parameter and is therefore probably looking in a different (default) region. That is why it is not finding your cluster.
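Assuming the cluster really is in us-east-2 as described, adding the region flag should resolve it:
eksctl create fargateprofile \
--cluster xyz-cicd \
--name cicd \
--namespace cicd \
--region us-east-2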

How do you get kubectl to log in to an AWS EKS cluster?

Starting from a ~empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with Programmatic access and AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that is enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. In case you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In that case, you can use the root user credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster – it contains the mapping of IAM entities to Kubernetes RBAC users and groups.
I'm not sure how you pass credentials to aws-iam-authenticator:
If you have ~/.aws/credentials with aws_profile_of_eks_iam_creator then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl ... will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
Also, this answer may be useful for understanding the first EKS user creation.
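If in doubt, you can inspect the user entry that was generated in ~/.kube/config and confirm that it invokes aws-iam-authenticator (newer AWS CLI versions generate an aws eks get-token command instead):
$ kubectl config view --minify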
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus, use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Created the cluster with the root user.
Then created an IAM user and AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used this access key and secret key in your kubeconfig settings (it doesn't matter how - there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause for the errors.
As the accepted answer described - you'll need to edit aws-auth in order to manage users or IAM roles for your cluster.
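To see what is currently mapped before changing anything, both of these work (the cluster name and region are placeholders):
kubectl describe configmap aws-auth -n kube-system
eksctl get iamidentitymapping --cluster <cluster_name> --region <region-code>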
Once you have set up the AWS config on your system, check the current identity to verify that you're using credentials that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
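The output will look roughly like the following; the Arn is the identity kubectl will end up using (the account ID and user name here are placeholders):
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/your-user"
}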
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig, containing the required Kubernetes API server URL, at $HOME/.kube/config.
Afterwards you can follow the kubectl installation instructions and this should work.
For those working with multiple profiles in the AWS CLI:
Here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two AWS profiles, prod and dev.
Generate a kubeconfig for both the prod and dev clusters using:
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
Use kubectx to view/change the current cluster, and kubens to switch the namespace within the cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to dev cluster.
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly we can view/change namespace in the current cluster using kubens.
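For example, the first call lists the namespaces in the current cluster and the second switches to one of them (the namespace here is just an illustration):
$ kubens
$ kubens kube-system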
Please use your updated secret access key and access key ID when connecting to the EKS cluster.