I use clusters in different AWS accounts, and every time I want to connect via kubectl I have to run aws configure.
How can I pass a different account's credentials to kubectl?
I think that by passing --kubeconfig I can pass the credentials, but I can't figure out how to do it.
You can manage the different clusters with a single kubeconfig as well.
All you need to do is change the context for each account's cluster within that single file:
kubectl config get-contexts
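For example, once both clusters have entries in that kubeconfig, switching between them looks roughly like this (the context name below is a placeholder; yours will typically be the EKS cluster ARN or an alias you set):
kubectl config use-context <context-name-of-the-other-cluster>
kubectl config current-context   # confirm which context is now active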
But if you are still looking for other approaches, you can create a separate kubeconfig for each account.
export KUBECONFIG=~/.kube/aws-account-1
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-1
For the second account:
export KUBECONFIG=~/.kube/aws-account-2
aws eks update-kubeconfig --name my-clust --region us-east-1 --profile account-2
Now, if you want to manage the cluster in account 1, set KUBECONFIG back to account 1's file:
export KUBECONFIG=~/.kube/aws-account-1
kubectl get pods
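As a variation, if you would rather not re-export KUBECONFIG each time, kubectl can merge several files via a colon-separated KUBECONFIG, and you then switch by context (file names as in the examples above):
export KUBECONFIG=~/.kube/aws-account-1:~/.kube/aws-account-2
kubectl config get-contexts   # shows the contexts from both files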
Related
I need to get information such as VPCs, subnets, security groups, etc for many AWS accounts at once. How can I go about this?
One solution is to use a for loop with the AWS CLI. Check out the CLI documentation for the service you want to gather information for, find the appropriate commands, then loop over the profiles in your ~/.aws/credentials file.
For example, if you want to get the VPCs, subnets, and security groups, those are all described in the EC2 CLI docs.
Here is an example of getting information about those resources and outputting it into the current directory as .json files (this assumes you didn't change the default output format when running aws configure):
#!/usr/bin/env bash
region=us-east-1
for profile in $(grep '^\[' ~/.aws/credentials | tr -d '[]')
do
    echo "getting vpcs, subnets, and security groups for $profile"
    aws ec2 describe-vpcs --region "$region" --profile "$profile" > "${profile}_vpcs.json"
    aws ec2 describe-subnets --region "$region" --profile "$profile" > "${profile}_subnets.json"
    aws ec2 describe-security-groups --region "$region" --profile "$profile" > "${profile}_security_groups.json"
done
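As a small variation, if you are on AWS CLI v2, aws configure list-profiles is a slightly more robust way to enumerate profiles than grepping the credentials file, since it also picks up profiles defined only in ~/.aws/config:
#!/usr/bin/env bash
region=us-east-1
for profile in $(aws configure list-profiles)
do
    echo "getting vpcs for $profile"
    aws ec2 describe-vpcs --region "$region" --profile "$profile" > "${profile}_vpcs.json"
done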
I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but they aren't there), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original credentials.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration, you may need to add the cluster again and associate it with the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
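For reference, both files are plain text, so a quick way to see what each command changed is simply:
cat ~/.ecs/credentials   # ECS profiles, plus which one is set as the default
cat ~/.ecs/config        # cluster configurations, plus the default cluster config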
Some resources:
ECS CLI Configurations
Starting from a nearly empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with programmatic access and the AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that's enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In that case, you can use the root user credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster – it contains the mapping from IAM entities to Kubernetes users and groups.
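Once you have any working admin credentials, you can inspect that mapping directly:
kubectl -n kube-system get configmap aws-auth -o yaml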
I'm not sure how you pass credentials to the aws-iam-authenticator:
If you have ~/.aws/credentials with an aws_profile_of_eks_iam_creator profile, then you can try: $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables: $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because any kubectl command will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses the environment variables or ~/.aws/credentials to give you a token.
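To double-check which AWS identity the token will be issued for, you can compare the caller identity with the IAM entity that created the cluster (profile name as in the example above):
$ AWS_PROFILE=aws_profile_of_eks_iam_creator aws sts get-caller-identity
$ kubectl config view --minify   # shows the exec entry (aws-iam-authenticator) kubectl will run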
Also, this answer may be useful for understanding the first EKS user creation.
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus: use kubectx to switch kubectl contexts.
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Have created the cluster with the root user.
Then created an IAM user and generated AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used that access key and secret key in your kubeconfig settings (it doesn't matter how - there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause for the errors.
As the accepted answer described - you'll need to edit aws-auth in order to manage users or IAM roles for your cluster.
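A minimal sketch of what that edit looks like, assuming you can still run kubectl as the IAM entity that created the cluster (the account ID, user ARN and username below are placeholders):
kubectl edit -n kube-system configmap/aws-auth
# then add a mapUsers entry along these lines:
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/my-user
#       username: my-user
#       groups:
#         - system:masters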
Once you have set up the AWS config on your system, check the current identity to verify that you're using the correct credentials, i.e. ones that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig with the required Kubernetes API server URL at $HOME/.kube/config.
Afterwards, you can follow the kubectl installation instructions and this should work.
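A quick sanity check once the kubeconfig is in place (assuming kubectl is already installed):
kubectl get svc   # should return the default "kubernetes" ClusterIP service if access works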
For those working with multiple profiles in the AWS CLI, here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two AWS profiles, prod and dev.
Generate kubeconfig entries for both the prod and dev clusters using:
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
Use kubectx to view/change current cluster, and kubens to switch namespace within cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to the dev cluster:
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly, we can view/change the namespace in the current cluster using kubens.
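For example (the namespace is just an illustration):
$ kubens               # list namespaces in the current cluster
$ kubens kube-system   # make kube-system the default namespace for kubectl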
Please use your updated secret key and access key ID to connect to the EKS cluster.
I am trying to get the RDS endpoint to use in user data with the CLI, but I am unable to figure it out.
I need to get the RDS endpoint to inject into a PHP file, but when I try the following I get:
Unable to locate credentials. You can configure credentials by running "aws configure".
I am building the EC2 instance and VPC using the CLI and need to be able to get the RDS endpoint as part of the user data.
I tried the following on the EC2 instance itself and I get the above error.
aws rds --region ca-central-1 describe-db-instances --query "DBInstances[*].Endpoint.Address"
Even if I am able to resolve that, I need to be able to get the endpoint to pass as part of the userdata. Is that even possible?
The Unable to locate credentials error says that the AWS Command-Line Interface (CLI) does not have any credentials to call the AWS APIs.
You should assign a role to the EC2 instance with sufficient permission to call describe-db-instances on RDS. See: IAM Roles for Amazon EC2
Then, your User Data can include something like:
#!/bin/bash
# capture the endpoint as plain text rather than a JSON array
RDS=$(aws rds --region ca-central-1 describe-db-instances --query "DBInstances[*].Endpoint.Address" --output text)
echo "$RDS" > file
Or pass it as a parameter, for example to a PHP script (the script name is a placeholder):
php your-script.php "$RDS"
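For example, one way to inject the value into a PHP config file (the file path and placeholder token below are hypothetical):
sed -i "s|__RDS_ENDPOINT__|$RDS|" /var/www/html/config.php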
I have it working with this:
mac=$(curl -s http://169.254.169.254/latest/meta-data/mac)
VPC_ID=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$mac/vpc-id)
aws rds describe-db-instances --region us-east-2 | jq -r --arg VPC_ID "$VPC_ID" '.DBInstances[] | select(.DBSubnetGroup.VpcId==$VPC_ID) | .Endpoint.Address'
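One caveat with the snippet above: if the instance enforces IMDSv2, the metadata calls need a session token first, e.g.:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
mac=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/mac)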
I am trying to use aws container service as per the documentation in http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
The below error is thrown when running the command:
aws ecs list-container-instances --cluster default
You must specify a region. You can also configure your region by running "aws configure".
The documentation does not mention anything about specifying a default region. How do we do that from the console?
I think you need to use, for example:
aws ecs list-container-instances --cluster default --region us-east-1
This depends on your region, of course.
"You must specify a region" is a not an ECS specific error, it can happen with any AWS API/CLI/SDK command.
For the CLI, either set the AWS_DEFAULT_REGION environment variable. e.g.
export AWS_DEFAULT_REGION=us-east-1
or set it inline for a single command (you will need this every time you run a region-specific command):
AWS_DEFAULT_REGION=us-east-1 aws ecs list-container-instances --cluster default
or set it in the CLI configuration file: ~/.aws/config
[default]
region=us-east-1
or pass/override it with the CLI call:
aws ecs list-container-instances --cluster default --region us-east-1
#1- Run this to configure the region once and for all:
aws configure set region us-east-1 --profile admin
Change admin to your profile name if it's different.
Change us-east-1 if your region is different.
#2- Run your command again:
aws ecs list-container-instances --cluster default
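To verify the setting took effect:
aws configure get region --profile admin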
If you have configured everything needed in .aws/config and .aws/credentials but still get this error, double-check the names in square brackets.
It should be [profile myLovelyAccName] in config and [myLovelyAccName] in credentials.
Two points to note:
the word "profile" and one space after - in the config file only
no typos in the acc name!
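For example, a matching pair would look like this (the profile name and region are just illustrations):
~/.aws/config:
[profile myLovelyAccName]
region = us-east-1
~/.aws/credentials:
[myLovelyAccName]
aws_access_key_id = ****
aws_secret_access_key = ****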
Just to add to the answers by Mr. Dimitrov and Jason: if you are using a specific profile and you have put your region setting there, then you need to add the --profile option to every request.
For example:
Let's say you have an AWS Playground profile, and your ~/.aws/config has [profile playground], which contains something like:
[profile playground]
region=us-east-1
then use something like this:
aws ecs list-container-instances --cluster default --profile playground
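Alternatively, export AWS_PROFILE once instead of adding --profile to every command:
export AWS_PROFILE=playground
aws ecs list-container-instances --cluster default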
I posted too soon; however, the ways to configure this are given in the link below:
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
and the way to get access keys is given in this link:
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html#cli-signup