kubectl error querying EC2 for volume info - amazon-web-services

I'm running Kubernetes v1.4.0+776c994 on an EC2 instance in AWS GovCloud.
I can list EC2 volumes with 'aws ec2 describe-volumes', but when I try to create a persistent volume, 'kubectl create -f aws-pv.yaml', I get this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "persistentvolumes \"pv0001\" is forbidden: error querying AWS EBS volume vol-05dffe55de3ac725b: error querying ec2 for volume info: error listing AWS volumes: UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id:",
  "reason": "Forbidden",
  "details": {
    "name": "pv0001",
    "kind": "persistentvolumes"
  },
  "code": 403
}
I've set these environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION=us-gov-west-1
CURL_CA_BUNDLE=/etc/origin/master/ca.crt

My IAM role, as the AWS user dvogel, allows me to run 'aws ec2 describe-volumes' successfully, but apparently those permissions aren't passed to the Kubernetes API when I run 'kubectl create -f aws-pv.yaml' in the same terminal. I'm guessing I need to set something (in admin.kubeconfig?) to make that happen.
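If it turns out the EBS lookup is actually done server-side by the API server (using the master's own AWS credentials, typically the EC2 instance's IAM role, rather than my shell environment), then granting that role ec2:DescribeVolumes might be what's needed; a hedged sketch, where k8s-master-role is a placeholder for whatever instance role the master actually uses:
$ aws iam put-role-policy \
    --role-name k8s-master-role \
    --policy-name allow-ebs-describe \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["ec2:DescribeVolumes"],"Resource":"*"}]}'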

Related

Failed to create Elasticache redis cluster

I was using the following command to create an ElastiCache Redis cluster via the CLI, but it always fails at the end. When I switch to the AWS console I can see the creating status at first, but after a while it always fails. Is there a way to view the creation logs in the AWS console?
aws elasticache create-replication-group \
    --cache-subnet-group group-name \
    --engine redis \
    --engine-version 6.x \
    --security-group-ids security-group-id \
    --num-node-groups 22 \
    --replicas-per-node-group 2 \
    --cache-parameter-group-name parameter-group-name \
    --auto-minor-version-upgrade \
    --replication-group-id some-group-id \
    --replication-group-description 'some description' \
    --cache-node-type cache.r6g.2xlarge \
    --region some-region \
    --automatic-failover-enabled
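One place failure reasons seem to surface is the ElastiCache event stream, which can be queried from the CLI; a hedged sketch using the same placeholder identifiers as above:
$ aws elasticache describe-events \
    --source-type replication-group \
    --source-identifier some-group-id \
    --duration 360 \
    --region some-region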

Unable to kubectl apply -f to my EKS cluster : Access denied (Roles may not be assumed by root accounts)

I created a Kubernetes cluster on AWS EKS and now I'm trying to deploy an app with:
kubectl apply -f deployment.yaml
But I'm getting this error:
An error occurred (AccessDenied) when calling the AssumeRole operation: Roles may not be assumed by root accounts.
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 254
I'm new to AWS and EKS, and from some Google research it seems the problem might be the user authenticated in the aws cli tool.
For information, my cli is configured with a user called aws_cli_user, which has the AdministratorAccess policy.
And for creating the k8s cluster, I created a role called EksSocaClusterRole, which has the AmazonEKSClusterPolicy attached.
I think I'm missing something that links the role and the user so I can push things to the cluster correctly!
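If the generated kubeconfig is telling the aws CLI to assume a role (which root credentials cannot do), one hedged way to check, and to regenerate it so it uses the aws_cli_user credentials directly (cluster name and region are placeholders):
$ grep -A8 'exec:' ~/.kube/config    # look for a --role-arn / -r argument in the token command
$ aws eks update-kubeconfig --name <cluster-name> --region <region>    # regenerate without --role-arn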

How do you get kubectl to log in to an AWS EKS cluster?

Starting from a ~empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with Programmatic access and AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that's enough to get kubectl working. You need to use that user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account; in that case, you can use the root user's credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster: it contains the mapping from IAM entities (users and roles) to Kubernetes users and groups.
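You can look at that mapping directly in the cluster:
$ kubectl -n kube-system get configmap aws-auth -o yaml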
I'm not sure how you pass credentials to the aws-iam-authenticator:
If you have ~/.aws/credentials with an aws_profile_of_eks_iam_creator profile, then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables: $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl ... will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
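If in doubt, you can check which exec command and credential source the current context actually uses:
$ kubectl config view --minify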
Also, this answer may be useful for understanding the first EKS user creation.
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus, use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Have created the cluster with the root user.
Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used this access key and secret key in your kubeconfig settings (it doesn't matter how; there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause of the errors.
As the accepted answer described, you'll need to edit aws-auth in order to manage users or IAM roles for your cluster.
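A hedged sketch of what that edit might look like (the account id and user name below are placeholders, not values from this question):
$ kubectl edit -n kube-system configmap/aws-auth
# then, inside the editor, add an entry under data like:
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/some-iam-user
#       username: some-iam-user
#       groups:
#         - system:masters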
Once you have set up the aws config on your system, check the current identity to verify that you're using the correct credentials that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig at $HOME/.kube/config with the required Kubernetes API server URL.
Afterwards you can follow the kubectl installation instructions and this should work.
For those working with multiple profiles in the aws cli:
Here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two aws profiles prod and dev.
Generate kubeconfig for both prod and dev clusters using
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
Use kubectx to view/change current cluster, and kubens to switch namespace within cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to dev cluster.
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly, we can view/change the namespace in the current cluster using kubens.
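For example (output shown approximately; the namespace is just an illustration):
$ kubens
default
kube-system
$ kubens kube-system
Context "arn:aws:eks:region:accountid:cluster/dev" modified.
Active namespace is "kube-system".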
Please use your updated secret key and access key ID to connect to the EKS cluster.

Setting up AWS EKS - Don't know username and password for config

I'm having an extremely hard time setting up EKS on AWS. I've followed this tutorial: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-launch-workers
I got up to the ~/.kube/config file, and when I try to run kubectl get svc I'm prompted with the following:
▶ kubectl get svc
Please enter Username: Alex
Please enter Password: ********
Error from server (Forbidden): services is forbidden: User "system:anonymous" cannot list services in the namespace "default"
I'm unsure where to find the username and password for this entry. Please point me to the exact place where I can find this information.
I think this also has to do with EKS RBAC. I'm not sure how to get around this without having access to the server.
This issue occurs if your user configuration isn't working in your kubeconfig, or if you are on a version of kubectl older than v1.10.
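A quick way to check which client version you have:
$ kubectl version --client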
I was getting the same error.
I created the EKS cluster via the aws console, however when I followed the steps in the docs to configure my kubeconfig, I got the same error:
$ kubectl get svc
Please enter Username: JessicaG
Please enter Password: ****************
Error from server (Forbidden): services is forbidden: User "system:anonymous" cannot list services in the namespace "default"
This is what ended up being my problem:
In the AWS Getting Started guide in the section "Step 1: Create Your Amazon EKS Cluster: To create your cluster with the console", it says this:
"You must use IAM user credentials for this step, not root credentials. If you create your Amazon EKS cluster using root credentials, you cannot authenticate to the cluster."
It turned out that I had created the EKS cluster with my root credentials, however I was trying to authenticate with my admin user JessicaG.
My solution:
I re-created the cluster with the admin IAM user JessicaG. To do so here are the steps I took:
1) I configured the default user in my local file ~/.aws/credentials with the user's access keys
$ cat ~/.aws/credentials
[default]
aws_access_key_id = <JessicaG access key>
aws_secret_access_key = <JessicaG secret key>
2) Created an eks cluster from the command line:
aws eks create-cluster --name eksdemo --role-arn <eksRole> --resources-vpc-config subnetIds=<subnets>,securityGroupIds=<securityGrps>
3) Configured kubeconfig:
apiVersion: v1
clusters:
- cluster:
    server: REDACTED
    certificate-authority-data: REDACTED
  name: eksdemo
contexts:
- context:
    cluster: eksdemo
    user: aws-jessicag
  name: eksdemo
current-context: eksdemo
kind: Config
preferences: {}
users:
- name: aws-jessicag
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "eksdemo"
That solved this problem for me.
Make sure you have a stable version of kubectl installed:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
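After downloading, you typically also need to make the binary executable and put it on your PATH (paths here assume a standard Linux setup):
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl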
Also, if you are getting an access denied error, make sure you are using the same IAM user credentials for kubectl that you used to create the EKS cluster.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
As rightly pointed out in #monokrome's answer, the issue happens either when your kubectl version is outdated or when your user configuration is not updated in the kubeconfig file (default location: ~/.kube/config).
In my case, the configuration for the cluster was not updated in kubeconfig and the following steps helped me update the same:
Ensure your AWS IAM user has (at least read) access to the EKS cluster through the proper IAM roles and policies.
Upgrade the aws cli to a version greater than 1.16.156 and run aws sts get-caller-identity. Ensure that the AWS account number and the user ARN are reported correctly in the output.
Run aws eks update-kubeconfig --region region-code --name EKS-cluster-name, which automatically updates the kubeconfig.
After successfully executing the above steps you should be able to run kubectl commands without any issues. You can try kubectl get svc to verify the same.

aws ec2 command works, aws iam command fails

This is an odd one for sure. I have an aws command line user that I've set up with admin privileges in the AWS account. The credentials I generated for the user work when I issue an aws ec2 command, but not when I run an aws iam command.
When I run the aws iam command, this is what I get:
[user#web1:~] #aws iam create-account-alias --account-alias=mcollective
An error occurred (InvalidClientTokenId) when calling the CreateAccountAlias operation: The security token included in the request is invalid.
However when I run an aws ec2 subcommand using the same credentials, I get a success:
[root#web1:~] #aws ec2 describe-instances --profile=mcollective
RESERVATIONS 281933881942 r-0080cb499a0299557
INSTANCES 0 x86_64 146923690580740912 False xen ami-6d1c2007 i-00dcdb6cbff0d7980 t2.micro mcollective 2016-07-27T23:56:50.000Z ip-xxx-xx-xx-xx.ec2.internal xx.xx.xx.xx ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xxx.xx.xx /dev/sda1 ebs True subnet-0e734056 hvm vpc-909103f7
BLOCKDEVICEMAPPINGS /dev/sda1
EBS 2016-07-23T01:26:42.000Z False attached vol-0eb52f6a94c5833aa
MONITORING disabled
NETWORKINTERFACES 0e:68:20:c5:fa:23 eni-f78223ec 281933881942 ip-xxx-xx-xx-xx.ec2.internal xxx.xx.xx.xx True in-use subnet-0e734056 vpc-909103f7
ASSOCIATION 281933881942 ec2-xxx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
ATTACHMENT 2016-07-23T01:26:41.000Z eni-attach-cbf11a1f True 0 attached
GROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
PRIVATEIPADDRESSES True ip-xxx-xx-xx-xxx.ec2.internal xxx.xx.xx.xx
ASSOCIATION 281933881942 ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
PLACEMENT us-east-1a default
PRODUCTCODES aw0evgkw8e5c1q413zgy5pjce marketplace
SECURITYGROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
STATE 16 running
TAGS Name mcollective
[root#web1:~] #
So why the heck are the same credentials working for one set of aws subcommands, but not another? I'm really curious about this one!
This question was answered by #garnaat, but the answer was buried in the comments on the question, so I'm quoting it here for those who might miss it:
Different profiles are being used in each command through use of the --profile flag.
Explanation:
In the first command
[user#web1:~] #aws iam create-account-alias --account-alias=mcollective
--profile is not specified, so the default aws profile credentials are automatically used
In the second command
aws ec2 describe-instances --profile=mcollective
--profile is specified, which overrides the default profile
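So to make the iam command use the same credentials, either pass the same flag or export the profile for the whole session (the profile name comes from the commands above):
$ aws iam create-account-alias --account-alias mcollective --profile mcollective
or
$ export AWS_PROFILE=mcollective
$ aws iam create-account-alias --account-alias mcollective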