EKS container cannot reach DynamoDb or other AWS services - amazon-web-services

We have deployed Alpine images on EKS Fargate nodes, and have also associated a service account with an IAM role which has access to DynamoDB and some other services.
When deploying the containers, we can see that AWS has automatically set these env vars on all containers
AWS_ROLE_ARN=arn:aws:iam::1111111:role/my-role
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
But if we execute either of these commands with the CLI:
aws sts get-caller-identity
or
aws dynamodb list-tables
the command simply hangs and does not return any results.
We have followed the docs on setting up IAM roles for EKS (Kubernetes) service accounts. Is there anything more we need to do to check the connectivity from the containers to, for example, DynamoDB? (Please note: from Lambda, for instance, we can access DynamoDB; an endpoint exists for the necessary services.)
When I execute this on the pod:
aws sts assume-role-with-web-identity \
  --role-arn $AWS_ROLE_ARN \
  --role-session-name mh9test \
  --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE \
  --duration-seconds 1000
I get this error: Connect timeout on endpoint URL: "sts.amazonaws.com", which is strange because the VPC endpoint is sts.eu-central-1.amazonaws.com.
I also cannot ping endpoint addresses such as ec2.eu-central-1.amazonaws.com.
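As a sketch of what we can try from inside the pod, assuming only regional interface endpoints (such as sts.eu-central-1.amazonaws.com) exist in the VPC: force the CLI to use the regional STS endpoint instead of the global sts.amazonaws.com (whether the variable is honoured depends on the CLI/SDK version):
export AWS_DEFAULT_REGION=eu-central-1
export AWS_STS_REGIONAL_ENDPOINTS=regional
aws sts get-caller-identity
# or target the regional endpoint explicitly
aws sts get-caller-identity --endpoint-url https://sts.eu-central-1.amazonaws.com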
Thanks
Thomas

Related

Is any restful API available in kubernetes api server to generate api access token?

I want to create a Kubernetes API access token using the rest API instead of the command below:
aws eks get-token --cluster-name --region ap-south-1 --role --profile

AWS - ECS load S3 files in entrypoint script

Hi all!
Code: (entrypoint.sh)
#!/bin/sh
printenv
# fetch temporary credentials from the ECS container credentials endpoint
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
# jq -r returns raw strings, so the values are not wrapped in quotes
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq -r .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq -r .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq -r .Token)
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch AWS S3 files into ECS, inspired by the AWS Documentation (but I'm fetching from S3 directly, not through a VPC endpoint).
I have configured the bucket policy and the role policy (which is passed in the task definition as taskRoleArn and executionRoleArn).
Locally, when I fetch with the aws cli and pass the temporary credentials (which I logged in ECS with the printenv command in the entrypoint script), everything works fine and I can save files on my PC.
On ECS I get this error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has anyone had a similar problem?
First thing: if you are working inside AWS, it is strongly recommended to use an ECS service role, ECS task role, or EC2 instance role; you do not need to fetch credentials from the metadata endpoint yourself.
But it seems like the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your container instance already has a role assigned, then you do not need to export the access key; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that
can be used by the containers in a task. Applications must sign their
AWS API requests with AWS credentials, and this feature provides a
strategy for managing credentials for your applications to use,
similar to the way that Amazon EC2 instance profiles provide
credentials to EC2 instances. Instead of creating and distributing
your AWS credentials to the containers or using the EC2 instance’s
role, you can associate an IAM role with an ECS task definition or
RunTask API operation.
So when you assign a role to the ECS task or ECS service, your entrypoint can be as simple as:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your export will not work as you are expecting; the best way to pass environment variables to a container is from the task definition, not by exporting them in the entrypoint.
I suggest assigning a role to the ECS task, and it should work as you expect.
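For illustration, a rough sketch of registering a task definition that carries the task role (the family name, role ARNs, and image below are placeholders, not taken from the question):
aws ecs register-task-definition --cli-input-json '{
  "family": "my-task",
  "taskRoleArn": "arn:aws:iam::111111111111:role/my-s3-read-role",
  "executionRoleArn": "arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "memory": 512,
      "essential": true
    }
  ]
}'
With taskRoleArn set, the AWS CLI and SDKs inside the container pick up credentials from the container credentials endpoint automatically, so nothing needs to be exported in the entrypoint.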

AWS Get VPC per region limit using AWS SDK or CLi

I wanted to add validation to my script before starting the Pod build in AWS.
One of the validation steps is to check the number of VPCs in the requested region against the max limit set on the account.
I didn't find any CLI or SDK API to get it.
However, there are similar APIs; for example, to get the max Elastic IPs per VPC, I can query:
aws ec2 describe-account-attributes
And look for "AttributeName": "default-vpc"
There is a brand new service which is able to do what you want: AWS Service Quotas.
It is currently available in most regions.
You can query the VPC service limit using the GetServiceQuota action.
The quota code for the quota VPCs per Region is L-F678F1CE (ARN: arn:aws:servicequotas:<REGION>::vpc/L-F678F1CE).
The service code for the service Amazon Virtual Private Cloud (Amazon VPC) is vpc.
Documentation: https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html
GetServiceQuota-Command Documentation for the CLI: https://docs.aws.amazon.com/cli/latest/reference/service-quotas/get-service-quota.html
You can use the latest version of the aws cli as follows:
aws service-quotas get-service-quota --service-code 'vpc' --quota-code 'L-F678F1CE'
On the Windows CLI:
aws service-quotas get-service-quota --service-code vpc --quota-code L-F678F1CE
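If you want just the numbers for a validation script, a small sketch (the --query paths follow the GetServiceQuota and DescribeVpcs response shapes):
# maximum VPCs per Region allowed on the account
LIMIT=$(aws service-quotas get-service-quota --service-code vpc --quota-code L-F678F1CE --query 'Quota.Value' --output text)
# number of VPCs currently in the Region
COUNT=$(aws ec2 describe-vpcs --query 'length(Vpcs)' --output text)
echo "Using $COUNT of $LIMIT VPCs"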
As long as Trusted Advisor access to the Service Limits category remains free, you can do this:
CHECK_ID=$(aws --region us-east-1 support describe-trusted-advisor-checks --language en --query "checks[?name=='Service Limits'].{id:id}[0].id" --output text)
aws support describe-trusted-advisor-check-result --check-id $CHECK_ID --query "result.sort_by(flaggedResources[?status!='ok'],&metadata[2])[].metadata" --output table --region us-east-1
CHECK_ID is currently eW7HH0l7J9

How do you get kubectl to log in to an AWS EKS cluster?

Starting from a ~empty AWS account, I am trying to follow https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
So that meant I created a VPC stack, then installed aws-iam-authenticator, awscli, and kubectl, then created an IAM user with programmatic access and AmazonEKSAdminPolicy directly attached.
Then I used the website to create my EKS cluster and used aws configure to set the access key and secret of my IAM user.
aws eks update-kubeconfig --name wr-eks-cluster worked fine, but:
kubectl get svc
error: the server doesn't have a resource type "svc"
I continued anyway, creating my worker nodes stack, and now I'm at a dead-end with:
kubectl apply -f aws-auth-cm.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
aws-iam-authenticator token -i <my cluster name> seems to work fine.
The thing I seem to be missing is that when you create the cluster you specify an IAM role, but when you create the user (according to the guide) you attach a policy. How is my user supposed to have access to this cluster?
Or ultimately, how do I proceed and gain access to my cluster using kubectl?
As mentioned in the docs, the AWS IAM user that created the EKS cluster automatically receives system:masters permissions, and that is enough to get kubectl working. You need to use this user's credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to access the cluster. If you didn't create a specific IAM user to create the cluster, then you probably created it using the root AWS account. In that case, you can use the root user credentials (Creating Access Keys for the Root User).
The main magic is inside the aws-auth ConfigMap in your cluster: it contains the mapping from IAM entities to Kubernetes users and groups used by RBAC.
I'm not sure how you pass credentials to aws-iam-authenticator:
If you have ~/.aws/credentials with aws_profile_of_eks_iam_creator then you can try $ AWS_PROFILE=aws_profile_of_eks_iam_creator kubectl get all --all-namespaces
Also, you can use environment variables $ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY AWS_DEFAULT_REGION=your-region-1 kubectl get all --all-namespaces
Both of them should work, because kubectl ... will use generated ~/.kube/config that contains aws-iam-authenticator token -i cluster_name command. aws-iam-authenticator uses environment variables or ~/.aws/credentials to give you a token.
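One quick way to check which identity the authenticator will actually use (the profile and cluster name are the same placeholders as above):
$ AWS_PROFILE=aws_profile_of_eks_iam_creator aws sts get-caller-identity
$ AWS_PROFILE=aws_profile_of_eks_iam_creator aws-iam-authenticator token -i cluster_name
The first command should return the ARN of the IAM entity that created the cluster.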
Also, this answer may be useful for the understanding of the first EKS user creation.
Here are my steps using the aws-cli
$ export AWS_ACCESS_KEY_ID="something"
$ export AWS_SECRET_ACCESS_KEY="something"
$ export AWS_SESSION_TOKEN="something"
$ aws eks update-kubeconfig \
--region us-west-2 \
--name my-cluster
>> Added new context arn:aws:eks:us-west-2:#########:cluster/my-cluster to /home/john/.kube/config
Bonus, use kubectx to switch kubectl contexts
$ kubectx
>> arn:aws:eks:us-west-2:#########:cluster/my-cluster-two
>> arn:aws:eks:us-east-1:#####:cluster/my-cluster
$ kubectx arn:aws:eks:us-east-1:#####:cluster/my-cluster
>> Switched to context "arn:aws:eks:us-east-1:#####:cluster/my-cluster".
Ref: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
After going over the comments, it seems that you:
Have created the cluster with the root user.
Then created an IAM user and created AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) for it.
Used this access key and secret key in your kubeconfig settings (it doesn't matter how; there are multiple ways to do that).
And here is the problem as described in the docs:
If you receive one of the following errors while running kubectl
commands, then your kubectl is not configured properly for Amazon EKS
or the IAM user or role credentials that you are using do not map to a
Kubernetes RBAC user with sufficient permissions in your Amazon EKS
cluster.
could not get token: AccessDenied: Access denied
error: You must be logged in to the server (Unauthorized)
error: the server doesn't have a resource type "svc" <--- Your case
This could be because the cluster was created with one set of AWS
credentials (from an IAM user or role), and kubectl is using a
different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role)
that creates the cluster is added to the Kubernetes RBAC authorization
table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API
server using kubectl.
For more information, see Managing users or IAM
roles for your cluster. If you use the console to create the cluster,
you must ensure that the same IAM user credentials are in the AWS SDK
credential chain when you are running kubectl commands on your
cluster.
This is the cause of the errors.
As the accepted answer describes, you'll need to edit the aws-auth ConfigMap in order to manage users or IAM roles for your cluster.
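In practice that usually means adding a mapUsers (or mapRoles) entry to the ConfigMap; a sketch, assuming you can still run kubectl as the cluster creator (the account ID, ARN, and username below are placeholders):
kubectl edit configmap aws-auth -n kube-system
# then add, under data:
#   mapUsers: |
#     - userarn: arn:aws:iam::111111111111:user/my-user
#       username: my-user
#       groups:
#         - system:masters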
Once you have set up the aws config on your system, check the current identity to verify that you're using the correct credentials, i.e. the ones that have permissions for the Amazon EKS cluster:
aws sts get-caller-identity
Afterwards use:
aws eks --region region update-kubeconfig --name cluster_name
This will create a kubeconfig file at $HOME/.kube/config with the required Kubernetes API server URL.
Afterwards you can follow the kubectl instructions for installation and this should work.
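A quick verification once kubectl is installed and pointed at the new context:
kubectl get svc
kubectl get nodes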
For those working with multiple profiles in the aws cli:
Here is what my setup looks like:
~/.aws/credentials file:
[prod]
aws_access_key_id=****
aws_secret_access_key=****
region=****

[dev]
aws_access_key_id=****
aws_secret_access_key=****
I have two aws profiles prod and dev.
Generate kubeconfig for both prod and dev clusters using
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile dev
$ aws eks --region <region> update-kubeconfig --name <cluster_name> --profile prod
This profile metadata is stored in the config file (~/.kube/config) as well.
Use kubectx to view/change current cluster, and kubens to switch namespace within cluster.
$ kubectx
arn:aws:eks:region:accountid:cluster/dev
arn:aws:eks:region:accountid:cluster/prod
Switch to dev cluster.
$ kubectx arn:aws:eks:region:accountid:cluster/dev
Switched to context "arn:aws:eks:region:accountid:cluster/dev".
Similarly we can view/change namespace in the current cluster using kubens.
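For example (kube-system is just an illustration):
$ kubens                  # list namespaces in the current cluster
$ kubens kube-system      # switch to the kube-system namespace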
Please use your updated secret key and access key ID to connect to the EKS cluster.

Error when creating aws emr default-roles

I'm trying to create a cluster using the aws cli emr command. However, I can't seem to create the default roles needed before calling aws emr create-cluster:
$ aws emr create-default-roles
A client error (NoSuchEntity) occurred when calling the GetRole operation: Unknown
I have made sure that my user has the following permissions:
IAMFullAccess - AWS Managed policy
AmazonElasticMapReduceforEC2Role - AWS Managed policy
AmazonElasticMapReduceFullAccess - AWS Managed policy
Any tips? Is there a place where I can just copy the roles json and create them manually?
The reason I started to do this is that when I run aws emr create-cluster it returns a cluster-id, but when that cluster-id is queried its state is set to terminated with the error: EMR service role arn:aws:iam::141703095098:role/EMR_DefaultRole is invalid
I DID manage to add these roles using the console by going to:
My Security Credentials > Roles > Create New Role
First Role with the following properties:
name: EMR_DefaultRole
policy: AmazonElasticMapReduceRole
Second Role with the following properties:
name: EMR_EC2_DefaultRole
policy: AmazonElasticMapReduceforEC2Role
Unfortunately I didn't get the command line to work, but I suspect it might be something to do with my local setup.
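For anyone who wants to create the two roles by hand from the CLI instead of the console, a rough sketch (the trust policies below are assumptions based on the standard EMR service principals):
aws iam create-role --role-name EMR_DefaultRole --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Principal": {"Service": "elasticmapreduce.amazonaws.com"}, "Action": "sts:AssumeRole"}]
}'
aws iam attach-role-policy --role-name EMR_DefaultRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole
aws iam create-role --role-name EMR_EC2_DefaultRole --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}]
}'
aws iam attach-role-policy --role-name EMR_EC2_DefaultRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role
# EMR_EC2_DefaultRole also needs an instance profile with the same name
aws iam create-instance-profile --instance-profile-name EMR_EC2_DefaultRole
aws iam add-role-to-instance-profile --instance-profile-name EMR_EC2_DefaultRole --role-name EMR_EC2_DefaultRole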
I had issues with the console. With the client this worked:
# upgrade aws cli (can't hurt)
pip install --upgrade --user awscli
# aws configure process if you haven't (look it up)
# delete all the defunct shizzles
aws iam remove-role-from-instance-profile --instance-profile-name EMR_EC2_DefaultRole \
--role-name EMR_EC2_DefaultRole
aws iam delete-instance-profile \
--instance-profile-name EMR_EC2_DefaultRole
aws iam detach-role-policy \
--role-name EMR_EC2_DefaultRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role
aws iam delete-role --role-name EMR_EC2_DefaultRole
aws iam detach-role-policy --role-name EMR_DefaultRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole
aws iam delete-role --role-name EMR_DefaultRole
# now re-create them
aws emr create-default-roles
Note if you have attached policies, you might have to go into the console and delete them or find the appropriate aws cli command.
Source (our product is buggy and our role system is cumbersome, but if you buy premium support we'll tell you the workarounds):
https://aws.amazon.com/premiumsupport/knowledge-center/emr-default-role-invalid/