Given a scenario where I have two Kubernetes clusters, one hosted on AWS EKS and the other on another cloud provider, I would like to manage the EKS cluster from the other cloud provider. What's the easiest way to authenticate such that I can do this?
Would it be reasonable to generate a kubeconfig where I embed the result of aws eks get-token (or something like that) and copy it to the cluster on the other cloud provider? Or are these tokens not persistent?
Any help or guidance would be appreciated!
I believe the most correct approach is the one described in Create a kubeconfig for Amazon EKS.
Yes, you create a kubeconfig that calls aws eks get-token and later add the newly created config to the KUBECONFIG environment variable, e.g.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws
or you can add it to your ~/.bash_profile for convenience:
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws' >> ~/.bash_profile
For detailed steps, please refer to the provided URL.
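As a minimal sketch (the region, cluster name and file path are placeholders), you can write the EKS kubeconfig to a separate file and point kubectl at it. Note that the file does not store a token; it stores an exec entry that runs aws eks get-token on every call, so there is nothing in it that expires:
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster --kubeconfig ~/.kube/config-aws
# The user entry in the generated file looks roughly like this (versions may differ):
#   exec:
#     apiVersion: client.authentication.k8s.io/v1beta1
#     command: aws
#     args: ["eks", "get-token", "--cluster-name", "my-eks-cluster"]
KUBECONFIG=~/.kube/config-aws kubectl get nodes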
I had this use case where I needed to work with multi-cloud providers.
So I created kubech to deal with that situation and manage multiple clusters simultaneously.
Assuming that you have a Linux machine on the second cloud provider, you can use the following command to generate the kubeconfig file:
aws eks update-kubeconfig --region <region-code> --name <cluster-name>
You can change the target file using the --kubeconfig flag.
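For example (the file path is only an illustration):
aws eks update-kubeconfig --region <region-code> --name <cluster-name> --kubeconfig ~/.kube/config-aws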
Ref: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
The Terraform eks module can generate an output kubeconfig, but the aws_eks_cluster resource doesn't have this feature.
Why not add this feature?
This feature was removed in v18 of the aws module; from the docs:
Support for managing kubeconfig and its associated local_file resources have been removed; users are able to use the awscli provided aws eks update-kubeconfig --name <cluster_name> to update their local kubeconfig as necessary
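A small sketch of that workflow, assuming your Terraform configuration exposes the cluster name as an output named cluster_name (that output name is an assumption):
# Reads the cluster name from Terraform state and refreshes the local kubeconfig.
aws eks update-kubeconfig --region <region-code> --name "$(terraform output -raw cluster_name)"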
The Terraform eks module exposes that file by default; you can take a look at their files or even use their module. It's relatively easy to set up and works great. Links: eks module. I am not 100% sure this is the exact section for it, but you can take a look at their whole repo.
I have a pod which I plan to run on EKS- and KOps-managed clusters.
The pod does some calculations and I want to write the results to DynamoDB.
How can I access AWS DynamoDB from it?
Also, say I want to package it using Helm: is there a way to keep all of the configuration required to access AWS inside the pod's Helm package, without any cluster-level configuration?
You need an AWS IAM role mapped to a Kubernetes ServiceAccount (IAM Roles for Service Accounts). Try using this user guide: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
Also, for kops you can use the Kiam project; think of it as an IAM proxy: https://github.com/uswitch/kiam
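A minimal sketch of the ServiceAccount approach (IRSA) on EKS; the role ARN, names and image are placeholders, and the IAM role must already exist with an OIDC trust policy for your cluster:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dynamodb-writer
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/dynamodb-writer  # placeholder role
---
apiVersion: v1
kind: Pod
metadata:
  name: calculation-pod
spec:
  serviceAccountName: dynamodb-writer  # the pod receives temporary credentials for the role above
  containers:
  - name: app
    image: my-registry/my-app:latest   # placeholder image; the AWS SDK picks the credentials up automatically
EOF
For the Helm part, you could expose the role ARN (and the ServiceAccount annotation) as a chart value, so the AWS-specific configuration lives in the chart's values rather than in cluster-level configuration. On KOps, Kiam uses a pod annotation (iam.amazonaws.com/role) instead.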
I'm using Jenkins Pipeline and Packer to create AMIs inside an AWS account.
Jenkins uses a Kubernetes cluster for its agents (via a cloud plugin that lets me parameterize Docker pod templates).
I have a pipeline that pulls a Git project containing the Packer template and runs packer validate, which succeeds. Then it runs packer build and I get the following error:
Build 'Amazon Linux 2 Classic' errored: No valid credential sources found for AWS Builder. Please see https://www.packer.io/docs/builders/amazon.html#specifying-amazon-credentials for more information on providing credentials for the AWS Builder.
I also use kube2iam to provide roles to my agent containers.
In my Packer template I don't define any AWS credentials, since I want to use a role instead of keys. Do you know if there is something I have to do inside the Packer template to indicate which role to use?
Best Regards,
Tony.
From what I understand, you are running Jenkins inside a Kubernetes cluster running on AWS EC2 instances? If so, the Jenkins agents running the build should be able to read available roles from the metadata of the instance they're running on.
In this case, the process would be to assign the desired IAM role to the instances, and Kubernetes should be able to handle the rest.
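Since you mention kube2iam: here is a hedged sketch of the annotation the agent pod needs so the AWS builder can obtain credentials through the metadata proxy (the role name is a placeholder, and the node's instance role must be allowed to assume it). With the Jenkins Kubernetes plugin you would put the annotation on the pod template rather than on a standalone Pod, but the idea is the same:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
  annotations:
    iam.amazonaws.com/role: packer-builder  # kube2iam serves this role to the pod via the metadata endpoint
spec:
  containers:
  - name: jnlp
    image: jenkins/inbound-agent:latest     # placeholder agent image
EOF
With that in place, Packer's Amazon builders resolve credentials through the normal instance-metadata chain, so no access_key or secret_key has to be set in the template.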
I've spent some time visiting all the web pages mentioning "KOps import", but did not find a way to import my manually created K8s cluster. "Manually created cluster" means the infra was deployed on AWS using Terraform, and Kubernetes was installed using Terraform's provisioner running a shell script. Now, as managing the environment manually is a pain, I am looking to move it under KOps. For that I have done the following so far:
Installed aws cli, kubectl and kops in my local machine.
Created KOps user with policies AmazonEC2FullAccess,
AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess,
AmazonVPCFullAccess and generated access and secret keys.
Configured credentials using aws configure.
Created S3 bucket to store state.
Set env variables like Region and Cluster name.
Finally, ran kops import command as below:
kops import cluster --region ${REGION} --name ${OLD_NAME}
But encountered below error:
Cluster.kops "jjm-prod-use1-kubernetes" not found
Verbosed:
$ kops import cluster --region ${REGION} --name ${OLD_NAME} -v 10
I0131 16:32:12.059651 25683 factory.go:68] state store s3://kops-state-store-jjm
I0131 16:32:13.133145 25683 s3context.go:194] found bucket in region "us-east-1"
I0131 16:32:13.133174 25683 s3fs.go:220] Reading file "s3://kops-state-store-jjm/jjm-prod-use1-kubernetes/config"
Which made me seriously consider posting this question. Is there any possible way that a K8s cluster created without using kube-up.sh can be brought under the control of KOps? Please advise.
Note: There's no way I can re-create (destroy and create) the clusters as they are running in production.
EDIT: I know this can be achieved only if the cluster was set up using kube-up.sh. But is there any other way?
That is only possible with a cluster bootstrapped via the kube-up.sh script, as officially stated in the Kops documentation pages. Actually, kube-up.sh has since been excluded from the list of supported Kubernetes installation tools for AWS. Although a cluster composed by kube-up.sh provides a lot of customization settings that are specifically applicable to AWS, the initial script uses environment variables to define these settings. Therefore, I assume that it's quite hard to achieve in your case.
Starting from a ~empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with programmatic access and the AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that's enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. In case you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In this case, you can use the root user credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster (in the kube-system namespace) – it contains the mapping from IAM entities to the Kubernetes users and groups used by RBAC.
I'm not sure how you pass credentials to aws-iam-authenticator:
If you have ~/.aws/credentials with aws_profile_of_eks_iam_creator then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
Also, this answer may be useful for understanding the creation of the first EKS user.
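A hedged sketch of inspecting and extending that ConfigMap with the cluster creator's credentials (the profile name matches the example above; the account ID and user name are placeholders):
AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl -n kube-system get configmap aws-auth -o yaml
AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl -n kube-system edit configmap aws-auth
# Under data.mapUsers, add an entry similar to (create the ConfigMap first if it doesn't exist yet):
#   mapUsers: |
#     - userarn: arn:aws:iam::123456789012:user/another-user
#       username: another-user
#       groups:
#         - system:masters
After that, the mapped IAM user can run kubectl against the cluster with their own credentials.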
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus, use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Have created the cluster with the root user.
Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used this access key and secret key in your kubeconfig settings (it doesn't matter how - there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.

For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause of the errors.
As the accepted answer described, you'll need to edit the aws-auth ConfigMap in order to manage users or IAM roles for your cluster.
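One way to do that, assuming you have eksctl installed (the account ID and user name below are placeholders):
# Run this with the cluster creator's credentials (the root account in your case),
# since only an already-mapped identity is authorized to change aws-auth.
eksctl create iamidentitymapping \
  --cluster <cluster_name> \
  --region <region> \
  --arn arn:aws:iam::123456789012:user/your-iam-user \
  --username your-iam-user \
  --group system:masters
Alternatively, edit the ConfigMap directly with kubectl -n kube-system edit configmap aws-auth.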
Once you have set up the AWS config on your system, check the current identity to verify that you're using the correct credentials, i.e. ones that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig containing the required Kubernetes API server URL at $HOME/.kube/config.
Afterwards, you can follow the kubectl installation instructions, and this should work.
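A quick sanity check once the kubeconfig is in place (any read-only call will do):
kubectl get svc --all-namespaces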
For those working with multiple profiles in the AWS CLI:
Here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two aws profiles prod and dev.
Generate kubeconfig entries for both the prod and dev clusters using:
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
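If you are curious where that metadata ends up: the exec section of the generated user entry carries an AWS_PROFILE environment variable (the exact layout depends on your awscli version). You can inspect it with:
grep -B 5 -A 5 'AWS_PROFILE' ~/.kube/config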
Use kubectx to view/change the current cluster, and kubens to switch the namespace within a cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to dev cluster.
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly we can view/change namespace in the current cluster using kubens.
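For example, list the namespaces and switch to kube-system (the namespace name is just an example):
$ kubens
$ kubens kube-system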
Please use your updated access key ID and secret access key when connecting to the EKS cluster.