aws ec2 command works, aws iam command fails - amazon-web-services

This is an odd one for sure. I have an AWS command line user that I've set up with admin privileges in the AWS account. The credentials I generated for the user work when I issue an aws ec2 command, but not when I run an aws iam command.
When I run the aws iam command, this is what I get:
[user@web1:~] #aws iam create-account-alias --account-alias=mcollective
An error occurred (InvalidClientTokenId) when calling the CreateAccountAlias operation: The security token included in the request is invalid.
However when I run an aws ec2 subcommand using the same credentials, I get a success:
[root@web1:~] #aws ec2 describe-instances --profile=mcollective
RESERVATIONS 281933881942 r-0080cb499a0299557
INSTANCES 0 x86_64 146923690580740912 False xen ami-6d1c2007 i-00dcdb6cbff0d7980 t2.micro mcollective 2016-07-27T23:56:50.000Z ip-xxx-xx-xx-xx.ec2.internal xx.xx.xx.xx ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xxx.xx.xx /dev/sda1 ebs True subnet-0e734056 hvm vpc-909103f7
BLOCKDEVICEMAPPINGS /dev/sda1
EBS 2016-07-23T01:26:42.000Z False attached vol-0eb52f6a94c5833aa
MONITORING disabled
NETWORKINTERFACES 0e:68:20:c5:fa:23 eni-f78223ec 281933881942 ip-xxx-xx-xx-xx.ec2.internal xxx.xx.xx.xx True in-use subnet-0e734056 vpc-909103f7
ASSOCIATION 281933881942 ec2-xxx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
ATTACHMENT 2016-07-23T01:26:41.000Z eni-attach-cbf11a1f True 0 attached
GROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
PRIVATEIPADDRESSES True ip-xxx-xx-xx-xxx.ec2.internal xxx.xx.xx.xx
ASSOCIATION 281933881942 ec2-xx-xx-xx-xx.compute-1.amazonaws.com xx.xx.xx.xx
PLACEMENT us-east-1a default
PRODUCTCODES aw0evgkw8e5c1q413zgy5pjce marketplace
SECURITYGROUPS sg-b1b3bdca CentOS 7 -x86_64- - with Updates HVM-1602-AutogenByAWSMP-
STATE 16 running
TAGS Name mcollective
You have mail in /var/spool/mail/root
[root@web1:~] #
So why the heck are the same credentials working for one set of aws subcommands, but not another? I'm really curious about this one!

This question was answered by @garnaat, but the answer was buried in the comments of the question, so I'm quoting it here for anyone who might otherwise miss it:
Different profiles are being used in each command through use of the --profile flag.
Explanation:
In the first command
[user@web1:~] #aws iam create-account-alias --account-alias=mcollective
--profile is not specified, so the default aws profile credentials are automatically used
In the second command
aws ec2 describe-instances --profile=mcollective
--profile is specified, which overrides the default profile
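To make the iam command use those same working credentials, the profile has to be passed explicitly or exported. A minimal sketch, reusing the profile name from the question:
aws iam create-account-alias --account-alias mcollective --profile mcollective
or, for the whole shell session:
export AWS_PROFILE=mcollective
aws iam create-account-alias --account-alias mcollective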

Related

ec2-user vs root IAM instance profile change

I have a CloudFormation-based deployment of an application, which creates an Amazon Linux 2 EC2 instance from a Marketplace AMI with an IAM Instance Profile. In the CloudFormation template, I run scripts via cfn-init that contain some AWS CLI commands (like ssm get-parameter), and I also mount an EFS volume using mount and amazon-efs-utils. This has been working like a charm.
Now I have a customer running this in their AWS account using the common AMI and CloudFormation templates. For them, cfn-init is failing: cfn-init runs as root, and root strangely has no IAM privileges and can't run the scripts or the efs-helper-based mount, even though the Instance Profile is there. But ec2-user does have IAM privileges from the Instance Profile!
To summarize:
ec2-user does have IAM privileges from the Instance Profile.
# logged in as ec2-user
aws ssm get-parameter --name "/aParameter"
returns a result
root does not have IAM privileges from the Instance Profile.
# logged in as ec2-user
sudo aws ssm get-parameter --name "/aParameter"
An error occurred (ParameterNotFound) when calling the GetParameter operation:
I expected that all users running in the instance would have the Instance Profile as their credentials if they have not explicitly authenticated some other way. I can't see anything in the customer's environment that would cause this - I was thinking maybe a Service Control Policy - but they have none of that.
Has anyone seen this behavior and have a fix?
Thanks so much in advance.....
I don't see what relationship there could be between ec2-user and the instance profile.
ec2-user is a user on Linux.
An instance profile is an IAM role attached to an EC2 instance.
I think you can check with other commands like aws s3 ls.
Btw, when you use sudo you are running as root.
Turns out AWS credentials were left in the root user on the AMI. :-/
Asri Badlah's aws iam get-user suggestion showed the problem. Thank you!
The instance profile did not have permission for get-user, but sudo aws iam get-user did work and showed the user from the credentials in /root/.aws.
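For anyone debugging something similar, a quick way to spot leftover credentials shadowing the instance profile (a sketch using standard AWS CLI commands):
# compare where the active credentials come from for each user
aws configure list
sudo aws configure list
# look for stray key files and confirm which identity root actually resolves to
sudo ls -la /root/.aws/
sudo aws sts get-caller-identity
aws configure list reports the source of the credentials in use (env, shared-credentials-file, or iam-role), which makes a stale /root/.aws easy to spot.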

The on-premises instance could not be registered because the request included an IAM user ARN that has already been used to register an instance

While debugging this question, I went on and:
1. In the IAM console at https://console.aws.amazon.com/iam/
1.1. Deleted one role (CodeDeployServiceRole).
1.2. Created a service role.
2. In the S3 console at https://console.aws.amazon.com/s3/
2.1. Emptied and deleted one bucket (tiagocodedeploylightsailbucket).
2.2. Created a new bucket in EU London (eu-west-2).
3. Back in the IAM console at https://console.aws.amazon.com/iam/
3.1. Deleted one policy (CodeDeployS3BucketPolicy).
3.2. Created a new policy.
4. Still in the IAM console at https://console.aws.amazon.com/iam/
4.1. Deleted one user (LightSailCodeDeployUser).
4.2. Created a new user (with that same name).
5. On the Lightsail home page at https://lightsail.aws.amazon.com/
5.1. Deleted the previous instance (codedeploy).
5.2. Created one new instance with Amazon Linux (Amazon_Linux_1) (note that if I used Amazon Linux 2 I would run into this problem),
using the script:
# write the on-premises configuration for the CodeDeploy agent
mkdir /etc/codedeploy-agent/
mkdir /etc/codedeploy-agent/conf
cat <<EOT >> /etc/codedeploy-agent/conf/codedeploy.onpremises.yml
---
aws_access_key_id: ACCESS_KEY
aws_secret_access_key: SECRET_KEY
iam_user_arn: arn:aws:iam::525221857828:user/LightSailCodeDeployUser
region: eu-west-2
EOT
# download and install the CodeDeploy agent
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
Checked that the CodeDeploy agent was running, and then ran the following command in the AWS CLI:
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/LightSailCodeDeployUser --region eu-west-2
I get
An error occurred (IamUserArnAlreadyRegisteredException) when calling the RegisterOnPremisesInstance operation: The on-premises instance could not be registered because the request included an IAM user ARN that has already been used to register an instance. Include either a different IAM user ARN or IAM session ARN in the request, and then try again.
Even though I deleted the user, created one with the same name and then deleted the other existing instance, the IAM User ARN is still the same
arn:aws:iam::525221857828:user/LightSailCodeDeployUser
To fix it, I went back to step 4 and created a user with a different name; then I updated the script for the instance creation, checked that the CodeDeploy agent was running, and now when running the following in the AWS CLI
aws deploy register-on-premises-instance --instance-name Amazon_Linux_1 --iam-user-arn arn:aws:iam::525221857828:user/GeneralUser --region eu-west-2
I get the expected result
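As an alternative (untested here), it should also be possible to keep the original user name by first deregistering the stale registration that is still holding the IAM user ARN, along the lines of:
aws deploy deregister-on-premises-instance --instance-name <previously-registered-name> --region eu-west-2
and only then re-running register-on-premises-instance with the original ARN.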

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but no), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, it sets the metadata to use a-second-profile by default, but you still have a profile named default that points to the original creds.
My guess is that to see the first cluster you need to now specify a profile name since you changed the default. If you didn't give your initial profile a name then it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration you may need to add the cluster again and associate to the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
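For reference, both files are plain YAML and usually look roughly like this (a sketch based on the commands above, with keys elided):
# ~/.ecs/credentials
version: v1
default: a-second-profile
ecs_profiles:
  default:
    aws_access_key_id: ****
    aws_secret_access_key: ****
  a-second-profile:
    aws_access_key_id: ****
    aws_secret_access_key: ****
# ~/.ecs/config
version: v1
default: default
clusters:
  default:
    cluster: my-second-cluster
    region: us-east-1
    default_launch_type: EC2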
Some resources:
ECS CLI Configurations

InvalidClientTokenId when calling get-caller-identity on an AWS EC2 instance with instance profile

We're having an issue where we're on a CentOS EC2 instance that is using a role through an attached instance profile. After SSHing in and landing on the console, we run the Python awscli command-line tool to get our identity:
$ aws sts get-caller-identity
we're getting
An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid
Other commands, such as aws ec2 describe-instances, work and are allowed by the instance profile.
From reading the AWS documentation, no permissions should be required to get-caller-identity and there's no explicit deny set on the role associated with instance.
We checked and there's no .aws/credentials file and no env variables set, so access should be entirely managed through the metadata service on the EC2 instance.
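A quick sanity check on the instance is to ask the metadata service directly whether role credentials are being served (a sketch, assuming IMDSv1 is reachable):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
The first call lists the attached role name; the second returns the temporary AccessKeyId, SecretAccessKey and Token the CLI should be picking up.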
Is there something missing in our setup or invocation of the awscli that might cause the permission to fail?
Just documenting the fix for anyone that runs into this issue.
All calls to the awscli should probably include a --region <region> parameter.
E.g.
$ aws sts get-caller-identity --region us-east-2
We were prompted for the region on our aws ec2 describe-instances call, but the aws sts get-caller-identity call just failed.
Additionally, we found that the AWS_REGION environment variable didn't seem to affect calls: we still needed to include the --region <region> parameter.
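If you hit the same thing, persisting a default region so --region isn't needed on every call may help; a sketch (which setting takes effect can depend on the CLI version):
aws configure set region us-east-2
# or
export AWS_DEFAULT_REGION=us-east-2
aws sts get-caller-identity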

How do you get kubectl to log in to an AWS EKS cluster?

Starting from a nearly empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli and kubectl, then created an IAM user with Programmatic access and AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that's enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account; in that case, you can use the root user credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster: it contains the mapping of IAM entities to Kubernetes RBAC users and groups.
I'm not sure how you pass credentials to the aws-iam-authenticator:
If you have ~/.aws/credentials with an aws_profile_of_eks_iam_creator profile, then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables: $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl ... will use the generated ~/.kube/config, which contains the aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
Also, this answer may be useful for the understanding of the first EKS user creation.
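For context, the user entry in the generated ~/.kube/config typically looks roughly like this (a sketch; the ARN, cluster name, exec apiVersion, and whether an AWS_PROFILE env entry is present vary by setup and tool version):
users:
- name: arn:aws:eks:your-region-1:111122223333:cluster/cluster_name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - cluster_name
      env:
        - name: AWS_PROFILE
          value: aws_profile_of_eks_iam_creator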
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus: use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Have created the cluster with the root user.
Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used this access key and secret key in your kubeconfig settings (it doesn't matter how; there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl commands, then your kubectl is not configured properly for Amazon EKS, or the IAM user or role credentials that you are using do not map to a Kubernetes RBAC user with sufficient permissions in your Amazon EKS cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role), and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl.
For more information, see Managing users or IAM roles for your cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
This is the cause for the errors.
As the accepted answer described, you'll need to edit aws-auth in order to manage users or IAM roles for your cluster.
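A minimal sketch of that edit, mapping an extra IAM user to cluster admin (the account ID and user name below are placeholders):
kubectl edit -n kube-system configmap/aws-auth
# then, under data:, add something like
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-admin-user
      username: my-admin-user
      groups:
        - system:masters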
Once you have set up the AWS config on your system, check the current identity to verify that you're using the correct credentials that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig file at $HOME/.kube/config with the required Kubernetes API server URL.
Afterwards you can follow the kubectl instructions for installation and this should work.
For those working with multiple profiles in the AWS CLI:
Here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two aws profiles prod and dev.
Generate kubeconfig for both prod and dev clusters using
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
Use kubectx to view/change current cluster, and kubens to switch namespace within cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to dev cluster.
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly we can view/change namespace in the current cluster using kubens.
Please use your updated access key ID and secret access key to connect to the EKS cluster.