Unable to get aws-iam-authenticator in config-map while applying through AWS CodeBuild - amazon-web-services

I am building a CI/CD pipeline, using AWS CodeBuild to build and deploy an application (service) to an AWS EKS cluster. I have installed kubectl and aws-iam-authenticator properly,
but the generated config shows aws as the command instead of aws-iam-authenticator:
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:*******:cluster/DevCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - DevCluster
      command: aws
      env: null
[Container] 2019/05/14 04:32:09 Running command kubectl get svc 
error: the server doesn't have a resource type "svc"
I do not want to edit the ConfigMap manually because it comes through the pipeline.

As @Priya Rani said in the comments, they found the solution.
There is no issue with the configmap file; it's all right.
1) I needed to edit the trust relationship of the CloudFormation (cluster + node instance) role so that it can communicate with CodeBuild.
2) I needed to add a userdata section so the node instances can communicate with the cluster (see the sketch below).
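For step 2, on the EKS-optimized worker AMI the userdata normally just calls the bootstrap script with the cluster name. This is a minimal sketch (not taken from the original answer), assuming the EKS-optimized AMI and reusing the DevCluster name from the question:

#!/bin/bash
# Worker node userdata: join this instance to the EKS control plane.
# Only works on the EKS-optimized AMI, which ships /etc/eks/bootstrap.sh.
set -o xtrace
/etc/eks/bootstrap.sh DevCluster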

Why don't you just load a proper/dedicated kube config file by setting the KUBECONFIG env variable inside your CI/CD pipeline, like this:
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
which would include the right command to use with aws-iam-authenticator:
#
#config-devel
#
...
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"

Related

Kubectl and several AWS accounts

I'm using the EKS service on AWS, and I use kubectl to manage all of the Kubernetes infrastructure.
I have several AWS profiles, and I need to switch among them when needed.
What is the best way to switch from one AWS profile to another, so I can run the same kubectl command against different AWS accounts?
Thanks
In your ~/.kube/config you can set which profile to use.
Example:
users:
- name: arn:aws:eks:us-east-1:########:cluster/production
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      env:
      - name: AWS_PROFILE
        value: <my-profile>
      command: aws
If you need to change it, you can edit the file.
Another option, if you want to be able to change profiles easier is to create separate config files for each profile. Then when you run a command, you can set the KUBECONFIG value: KUBECONFIG=~/.kube/config-alternate kubectl .... That could be aliased for easier use as well. If there is an easier way, I don't know it.
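For example, a couple of hypothetical aliases (the alias names and kubeconfig file names below are made up for illustration):

# one kubeconfig per AWS profile/account
alias kprod='KUBECONFIG=~/.kube/config-production kubectl'
alias kstage='KUBECONFIG=~/.kube/config-staging kubectl'

# then:
kprod get pods
kstage get nodes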

Access remote EKS cluster from an EKS pod, using an assumed IAM role

I've gone over this guide to allow one of the pods running on my EKS cluster to access a remote EKS cluster using kubectl.
I'm currently running a pod using amazon/aws-cli inside my cluster, mounting a service account token which allows me to assume an IAM role configured with kubernetes RBAC according to the guide above. I've made sure that the role is correctly assumed by running aws sts get-caller-identity and this is indeed the case.
I've now installed kubectl and configured my ~/.kube/config like so:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: <redacted>
  name: <cluster-arn>
contexts:
- context:
    cluster: <cluster-arn>
    user: <cluster-arn>
  name: <cluster-arn>
current-context: <cluster-arn>
kind: Config
preferences: {}
users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      - --role
      - <role-arn>
      command: aws
      env: null
However, every operation I try to carry out using kubectl results in this error -
error: You must be logged in to the server (Unauthorized)
I have no idea what I misconfigured, and would appreciate any ideas on how to get a more verbose error message.
If the AWS CLI is already using the identity of the role you want, then you don't need to specify --role and <role-arn> in the kubeconfig args.
By leaving them in, the role returned by aws sts get-caller-identity will need sts:AssumeRole permission for the role <role-arn>. If they are the same, the role needs to be able to assume itself, which is redundant.
So I'd try removing those args from the kubeconfig and see if it helps.
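For clarity, the users entry from the kubeconfig above would then look like this (same placeholders as in the question, just with the --role pair removed):

users:
- name: <cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env: null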

how to deploy helm chart from gitlab to eks?

I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml.
I want to add a new step that deploys my app to the cluster using Helm, but I can't find a recent tutorial. All the tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this?
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
stages:
  - deploy
k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the gitlab console I got
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
1 - Create an IAM role or user from your AWS console.
2 - Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (there is a sketch of this entry after the CI snippets below).
You can follow this to understand how it works (the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3 - In your GitLab CI, if it is a user you created, you just have to add this:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
If you created a role rather than a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn <role-arn>
    - helm upgrade init
    - helm upgrade --install my-chart ./my-chart-folder
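For step 2 above, the aws-auth entry looks roughly like the following. This is a minimal sketch, assuming a hypothetical IAM user named gitlab-deployer mapped to system:masters; if you created a role instead, use mapRoles with a rolearn, and scope the group down if you don't want cluster-admin rights:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/gitlab-deployer   # hypothetical user name
      username: gitlab-deployer
      groups:
        - system:masters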
Here I am adding my method, which is generic and can be used in any K8S environment without AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
Add the result string as a variable to your CI/CD pipeline settings of the project/group. In my example I used kube_config. Read more on how to add variables here.
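One practical note (my addition, not from the original answer): CI/CD variables are easier to paste as a single line, so on Linux you may prefer the GNU coreutils -w 0 flag to disable line wrapping when generating the string:

# -w 0 keeps the base64 output on one line (GNU coreutils)
cat ~/.kube/config | base64 -w 0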
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy

variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config

k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    #- helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/

How to manage multiple GKE projects in one Google Cloud Account [duplicate]

This question already has answers here:
Run a single kubectl command for a specific project and cluster?
(2 answers)
Closed 2 years ago.
Given a situation where I have three separate GKE instances in different Google Cloud projects under the same billing account, how can I configure kubectl so that the commands I execute with it only apply to a specific cluster?
kubectl access to Kubernetes API servers is managed by configuration contexts.
Here is some documentation for how to do so. In a nutshell, you would stand up multiple Kubernetes clusters and then specify a configuration like so:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
  name: development
- cluster:
  name: scratch
users:
- name: developer
- name: experimenter
contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch
To automatically generate one, you can run the following commands:
# Add cluster details to the file
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
# Add user details to the configuration file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
# Add context details to the configuration file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
After that, you can save the context. Then, going forward, when you run a kubectl command, the action will apply to the cluster and namespace listed in the specified context. For example:
kubectl config --kubeconfig=config-demo use-context dev-frontend
To then change the context to another one you specified:
kubectl config --kubeconfig=config-demo use-context exp-scratch
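Since the question is specifically about GKE, it's also worth noting that gcloud can generate these kubeconfig entries for you, one context per cluster. A rough sketch with made-up project, zone and cluster names:

# writes a cluster, user and context into your kubeconfig for each cluster
gcloud container clusters get-credentials dev-cluster --project project-a --zone us-central1-a
gcloud container clusters get-credentials prod-cluster --project project-b --zone us-central1-a

# GKE contexts are named gke_<project>_<zone>_<cluster>
kubectl config get-contexts
kubectl config use-context gke_project-a_us-central1-a_dev-cluster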

EKS AWS: Can't connect Worker Node

I am very stuck on the "Launching worker nodes" step of the AWS EKS guide. And to be honest, at this point, I don't know what's wrong.
When I do kubectl get svc, I get my cluster so that's good news.
I have this in my aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::Account:role/rolename
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Here is my config in .kube
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERTIFICATE
    server: server
  name: arn:aws:eks:region:account:cluster/clustername
contexts:
- context:
    cluster: arn:aws:eks:region:account:cluster/clustername
    user: arn:aws:eks:region:account:cluster/clustername
  name: arn:aws:eks:region:account:cluster/clustername
current-context: arn:aws:eks:region:account:cluster/clustername
kind: Config
preferences: {}
users:
- name: arn:aws:eks:region:account:cluster/clustername
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - clustername
      command: aws-iam-authenticator.exe
I have launched an EC2 instance with the advised AMI.
Some things to note:
I launched my cluster with the CLI,
I created some Key Pair,
I am not using the Cloudformation Stack,
I attached those policies to the role of my EC2 : AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy.
It is my first attempt at kubernetes and EKS, so please keep that in mind :). Thanks for your help!
Your config file and auth file look right. Maybe there is some issue with the security group assignments? Can you share the exact steps that you followed to create the cluster and the worker nodes?
Also, is there any special reason why you had to use the CLI instead of the console? If it's your first attempt at EKS, you should probably try to set up a cluster using the console at least once.
Sometimes, for whatever reason, the aws-auth ConfigMap is not applied automatically, so you need to add it manually. I had this issue, so I'm leaving this here in case it helps someone.
Check to see if you have already applied the aws-auth ConfigMap.
kubectl describe configmap -n kube-system aws-auth
If you receive an error stating "Error from server (NotFound): configmaps "aws-auth" not found", then proceed
Download the configuration map.
curl -o aws-auth-cm.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
Open the file with your favorite text editor. Replace <ARN of instance role (not instance profile)> with the Amazon Resource Name (ARN) of the IAM role associated with your nodes, and save the file.
Apply the configuration.
kubectl apply -f aws-auth-cm.yaml
Watch the status of your nodes and wait for them to reach the Ready status.
kubectl get nodes --watch
You can also go to your aws console and find the worker node being added.
Find more info here