Helm on AWS EKS - amazon-web-services

Having my cluster up and running on AWS EKS, I'm having trouble running helm init; it fails with the following error:
$ helm init --service-account tiller --upgrade
Error: error installing: deployments.extensions is forbidden: User "system:anonymous" cannot create deployments.extensions in the namespace "kube-system"
kubectl works properly (object retrieval, creation, and cluster administration), authenticating and authorizing correctly by running heptio-authenticator-aws at connection time (via an exec section in the kubectl config).
In order to prepare the cluster for helm, I created the service account and role binding as specified in the helm docs.
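For context, a minimal sketch of that preparation as the Helm docs describe it (assuming the service account is named tiller in kube-system, matching the command above):
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller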
I've heard of people having helm running on EKS, and I'm guessing they're skipping the exec section of the kubectl config by hardcoding the token... I'd like to avoid that!
Any ideas on how to fix this? My guess is that it is related to helm not being able to execute heptio-authenticator-aws properly.

I was running helm version 2.8.2 when I hit this error; upgrading to v2.9.1 fixed it!
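In case it helps, a rough sketch of the upgrade, assuming the Helm client binary has already been replaced with v2.9.1 and Tiller was deployed with the tiller service account:
$ helm version --client                           # confirm the client is now v2.9.1
$ helm init --service-account tiller --upgrade    # upgrade the in-cluster Tiller to match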

Related

Stackdriver adapter installation on Istio

We have installed Istio manually on a GKE cluster. We want to install/add the Istio Stackdriver adapter so that Istio metrics are available on the Stackdriver Monitoring dashboard in GCP. I am not able to get the metrics despite adding the CRD as mentioned in
https://github.com/GoogleCloudPlatform/istio-samples/blob/master/common/install_istio.sh
git clone https://github.com/istio/installer && cd installer
helm template istio-telemetry/mixer-telemetry --execute=templates/stackdriver.yaml -f global.yaml --set mixer.adapters.stackdriver.enabled=true --namespace istio-system | kubectl apply -f -
I feel we are missing the authentication part. Can anyone help in resolving this?
I was unable to replicate your setup, and I noticed that the Istio version downloaded by the script was 1.4.2, which is not supported by GKE at the moment.
Nonetheless, I'd recommend checking this document for troubleshooting and consulting this guide to get Istio installed on GKE.
You should also be aware of a couple of limitations when using Istio on GKE.
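As a first check, assuming the mixer config CRDs were installed by the template above, you could verify that the Stackdriver adapter resources actually exist in the cluster, along these lines:
$ kubectl -n istio-system get handlers.config.istio.io,instances.config.istio.io,rules.config.istio.io | grep -i stackdriver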

Kubectl command throwing error: Unable to connect to the server: getting credentials: exec: exit status 2

I am doing a lab setup of EKS/kubectl, and after the cluster build completes I run the following:
> kubectl get node
And I get the following error:
Unable to connect to the server: getting credentials: exec: exit status 2
Moreover, I am fairly sure it is a configuration issue, because running
> kubectl version
gives the following:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
create-cluster | delete-cluster
describe-cluster | describe-update
list-clusters | list-updates
update-cluster-config | update-cluster-version
update-kubeconfig | wait
help
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec: exit status 2
Please advise next steps for troubleshooting.
Please delete the cache folder present in
~/.aws/cli/cache
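On a Unix-like shell, for example:
$ rm -rf ~/.aws/cli/cache    # clears cached credentials; they are regenerated on the next AWS call
$ kubectl get nodes          # retry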
For me, running kubectl get nodes or kubectl cluster-info gave the following error:
Unable to connect to the server: getting credentials: exec: executable kubelogin not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
I did the following to resolve this.
Deleted all of the contents inside ~/.kube/. In my case it's a Windows machine, so that is C:\Users\nis\.kube, where nis is the user name I am logged in as.
Ran the get credentials command as follows.
az aks get-credentials --resource-group terraform-aks-dev --name terraform-aks-dev-aks-cluster --admin
Note the --admin at the end. Without it, I was getting the same error.
Now the above two commands are working.
Reference: https://blog.baeke.info/2021/06/03/a-quick-look-at-azure-kubelogin/
Did you have the kubectl configuration file ready?
Normally we put it under ~/.kube/config, and the file includes the cluster endpoint, certificate, contexts, admin users, and so on.
For further details, read this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
In my case, as I am using Azure (not AWS), I had to install kubelogin, which resolved the issue.
kubelogin is a client-go credential (exec) plugin implementing Azure authentication. The plugin provides features that are not available in kubectl. It is supported on kubectl v1.11+.
Can you check your ~/.kube/config file?
For example, if you have started a local cluster using minikube and its config is available, you should not be getting the server error.
Sample config file
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/singhvi/.minikube/ca.crt
    server: https://127.0.0.1:32772
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/singhvi/.minikube/profiles/minikube/client.crt
    client-key: /Users/singhvi/.minikube/profiles/minikube/client.key
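A quick way to see which config and context kubectl is actually using:
$ kubectl config current-context    # name of the active context
$ kubectl config view --minify      # only the config backing the active context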
You need to update/recreate your local kubeconfig. In my case I deleted the whole ~/.kube/config and followed this tutorial:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Make sure you have installed the AWS CLI.
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
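Roughly, the regeneration looks like this (the region and cluster name below are placeholders for your own values):
$ aws eks update-kubeconfig --region eu-west-1 --name my-cluster
$ kubectl get nodes    # should now reach the cluster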
I had the same problem. The issue was that my .aws/credentials file contained multiple users, and the user that had permissions on the EKS cluster (admin_test) wasn't the default user. So in my case, I made "admin_test" the default user in the CLI using an environment variable:
export AWS_PROFILE='admin_test'
After that, I checked the default user with the command:
aws sts get-caller-identity
Finally, I was able to get the nodes with the kubectl get nodes command.
Reference: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
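As an alternative to switching the default profile, the AWS profile can also be passed explicitly when regenerating the kubeconfig, for example:
$ aws eks update-kubeconfig --name cluster_name --profile admin_test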
In EKS you can retrieve your kubectl credentials using the following command:
% aws eks update-kubeconfig --name cluster_name
Updated context arn:aws:eks:eu-west-1:xxx:cluster/cluster_name in /Users/theofpa/.kube/config
You can retrieve your cluster name using:
% aws eks list-clusters
{
    "clusters": [
        "cluster_name"
    ]
}
I had the same error and solved it by upgrading my awscli to the latest version.
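For reference, a rough check-and-upgrade sequence (pip is just one way to upgrade the v1 CLI; use your platform's installer otherwise, and cluster_name is a placeholder):
$ aws --version                                   # "aws eks get-token" needs awscli >= 1.16.156
$ pip3 install --upgrade awscli                   # one way to upgrade
$ aws eks get-token --cluster-name cluster_name   # should now print a token instead of a usage error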
Removing and recreating the ~/.aws/credentials file resolved this issue for me.
rm ~/.aws/credentials
touch ~/.aws/credentials
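After recreating the empty file you will need to repopulate it, for example with:
$ aws configure                  # re-enter the access key, secret key, default region and output format
$ aws sts get-caller-identity    # confirm the credentials resolve to the expected user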

helm install failing on GKE [duplicate]

This question already has answers here:
Not able to create Prometheus in K8S cluster
I am a total GCP newbie - I just created a new account.
I have created a GKE cluster - it is active - and have also downloaded the SDK.
I was able to deploy a pod on GKE using kubectl.
I have Tiller and the Helm client installed.
From the CLI, when I try running a helm command:
>helm install --name testngn ./nginx-test
Error: release testngn failed: namespaces "default" is forbidden: User
"system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
I have given my user the "owner" role, so hopefully that is not the issue. But I am not sure how the CLI identifies the user and permissions (this is new to me). Also, the kubectl -n flag does not work with helm (?).
Most of the documentation simply says to run helm init - but that does not grant any permissions to Tiller, so it fails, unable to execute anything.
Create a service account with the cluster-admin role using rbac-config.yaml (its contents are sketched after the commands below).
Then run helm init with this service account to give Tiller the permissions it needs:
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
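For reference, the rbac-config.yaml from the Helm v2 RBAC docs is roughly the following (written here as a shell heredoc so it can be pasted directly):
$ cat > rbac-config.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF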

kops - get wrong kubectl context

I used kops to create a Kubernetes cluster in AWS.
I want to validate the cluster using this command:
kops validate cluster
The stdout gives me: Using cluster from kubectl context: minikube
I think the problem is the wrong context, but why does kops not create a context for me?
This is my contexts:
kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube
There is no AWS Kubernetes cluster context.
How do I solve this?
Works like a charm:
kops export kubecfg --name=clustername.com
kops has set your kubectl context to k9s.finddeepak.com
kops helps you create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE in beta support, VMware vSphere in alpha, and other platforms planned.
Your current configuration uses the minikube config file from a previous installation, and that is fine. It's useful to have a few clusters in one config and switch between them.
The extended configuration will be saved into the ~/.kube/config file; you may try:
kops export kubeconfig ${CLUSTER_NAME}
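Putting it together, assuming ${CLUSTER_NAME} is set to your kops cluster name:
$ kops export kubecfg --name=${CLUSTER_NAME}     # writes the cluster context into ~/.kube/config
$ kubectl config get-contexts                    # the kops cluster should now be listed next to minikube
$ kubectl config use-context ${CLUSTER_NAME}     # switch away from the minikube context
$ kops validate cluster --name=${CLUSTER_NAME}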

aws kops create cluster errors out as InvalidClientTokenId

I am trying to deploy my application to Kubernetes on AWS using kops. For this I followed the steps given in the AWS workshop tutorial.
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/01-path-basics/101-start-here
I created an AWS Cloud9 environment by logging in as an IAM user and installed kops and the other required software as well. When I try to create the cluster using the following command
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES
--yes
I get an error like the one below in the Cloud9 IDE:
error running tasks: deadline exceeded executing task IAMRole/nodes.cs.cluster.k8s.local. Example error: error creating IAMRole: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: 30fe2a97-0fc4-11e8-8c48-0f8441e73bc3
I am not able to find a way to solve this issue. Any help would be appreciated.
I found the issue and fixed it.
I had not exported the following two environment variables in the terminal where I was running create cluster. These two variables are required when creating a cluster with kops:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
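A quick sanity check after exporting them, before re-running the create, might look like this (the cluster name and zones are the ones from the question):
$ aws sts get-caller-identity    # should now return your IAM identity instead of an auth error
$ kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes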