kubectl erroring on "interactiveMode must be specified"

I ran into an error today with kubectl that wasn't too clear. I'm using aws-iam-authenticator version 0.5.0:
_________:~$ kubectl --kubeconfig .kube/config get nodes -n my_nodes
Error in configuration: interactiveMode must be specified for ______ to use exec authentication plugin

Upgrading aws-iam-authenticator to the latest (0.5.9) fixed it.
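For context, the interactiveMode the error complains about is a field of the exec stanza in the kubeconfig. If upgrading is not an option, a minimal sketch of a stanza that sets it explicitly looks like this (the user name, cluster name, and apiVersion are placeholders, not taken from the post above; valid values are Never, IfAvailable, and Always):
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      interactiveMode: IfAvailable
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster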

Related

error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" in kubectl [duplicate]

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac as well.
I tried with the latest version (1.23.0) of kubectl as well and still got the same error. I am on aws-iam-authenticator (version 0.5.5) and was not able to download a lower version.
Can someone help me to resolve it?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed the way it behaves in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try to change the K8s context you're using to v1beta1
by editing your kubeconfig file (usually in ~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source if you're on the M1 architecture, as there's no darwin-arm64 binary built for it.
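To see which ExecCredential version your authenticator actually emits (and therefore which apiVersion the kubeconfig should declare), you can inspect its output directly; a sketch, with my-cluster as a placeholder and jq assumed to be installed:
aws-iam-authenticator token -i my-cluster | jq .apiVersion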
This worked for me using M1 chip
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
I also solved this by updating the apiVersion value in my kubeconfig file (~/.kube/config)
from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Also make sure the AWS CLI version is up-to-date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
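A quick way to confirm the upgrade took effect and that the CLI now returns the newer credential API (my-cluster is a placeholder; the get-token output is JSON containing an apiVersion field):
aws --version
aws eks get-token --cluster-name my-cluster | grep apiVersion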
This might be helpful for those hitting this issue with GitHub Actions.
In my case I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step that uses kodermax/kubectl-aws-eks, to pin them to fixed versions.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the Kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me.
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
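Keep in mind this also removes your kubeconfig, so you will need to regenerate it afterwards, for example with the AWS CLI as mentioned above (cluster name and region are placeholders):
aws eks update-kubeconfig --name mycluster --region us-east-1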

Migrate to updated APIs

I'm getting an error from GKE telling me to migrate off the API /apis/extensions/v1beta1/ingresses, even though I'm not using that API.
I ran the command kubectl get deployment [mydeployment] -o yaml and did not find the API in question.
It seems it is an IngressList that calls the old API. To check, you can use the following command, which will give you the entire ingress info.
kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
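To double-check that your own Ingress objects are already served by the newer API group, you can also query it explicitly using kubectl's resource.version.group form (a sketch, no resources from the question are assumed):
kubectl get ingresses.v1.networking.k8s.io --all-namespaces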
I have the same issue, even though I have upgraded the node version from 1.21 to 1.22.

Pod not started after sidecar injection manually using istio

I am getting the error below while trying to inject the Istio sidecar container into a pod manually.
Kubernetes version v1.21.0
Istio version: 1.8.0
Installation commands:
kubectl create namespace istio-system
helm install --namespace istio-system istio-base istio/charts/base
helm install --namespace istio-system istiod istio/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt
In kubectl get events, I can see the error below:
Error creating: admission webhook "sidecar-injector.istio.io" denied the request: template: inject:443: function "appendMultusNetwork" not defined
In the kube-apiserver logs, the errors below are observed:
W0505 02:05:30.750732 1 dispatcher.go:142] rejected by webhook "validation.istio.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.istio.io\" denied the request: configuration is invalid: gateway must have at least one server", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
Please let me know if you have any clue on how to resolve this error.
I went over the step-by-step installation with the official documentation and could not reproduce your problem.
Here are a few things worth checking:
Did you execute all the commands correctly?
Maybe you are running a different version of Istio? You can check by issuing the istioctl version command (see the sketch after this list).
Maybe you changed something in config files? If you did, what exactly?
Try the latest version of Istio (1.9)
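If the versions check out, a minimal manual injection flow looks like the sketch below, where deployment.yaml is a placeholder for your own manifest; the istioctl used for the injection should report the same version as the istiod installed by the Helm charts above:
istioctl version
istioctl kube-inject -f deployment.yaml | kubectl apply -f -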

Istio question, where is pilot-discovery command?

Where is the pilot-discovery command? I can't find it; the istio-1.8.0 directory has no command named pilot-discovery.
The pilot-discovery command is the command used by Pilot, which is now part of istiod.
istiod unifies functionality that Pilot, Galley, Citadel and the sidecar injector previously performed, into a single binary.
You can get your istio pods with
kubectl get pods -n istio-system
Use kubectl exec to get into your istiod container with
kubectl exec -ti <istiod-pod-name> -c discovery -n istio-system -- /bin/bash
Use pilot-discovery commands as described in the Istio documentation,
e.g.
istio-proxy@istiod-f49cbf7c7-fn5fb:/$ pilot-discovery version
version.BuildInfo{Version:"1.8.0", GitRevision:"c87a4c874df27e37a3e6c25fa3d1ef6279685d23", GolangVersion:"go1.15.5", BuildStatus:"Clean", GitTag:"1.8.0-rc.1"}
In case you are interested in the code: https://github.com/istio/istio/blob/release-1.8/pilot/cmd/pilot-discovery/main.go
I compiled the binary myself:
1. Download the Istio project.
2. Run make build.
3. Set the Go proxy (if needed for the module downloads).
4. cd out
You will see the binary there.
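Roughly, those steps translate into something like the sketch below (the GOPROXY value is only an example, and the order is adjusted so the proxy is set before the build; the built binaries end up under a platform-specific subdirectory of out/):
git clone https://github.com/istio/istio.git
cd istio
export GOPROXY=https://proxy.golang.org,direct
make build
ls out/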

WSO2 helm pattern-1 failing with configmaps "apim-conf" already exists

I have followed the steps described here: https://github.com/wso2/kubernetes-apim/tree/master/helm/pattern-1. I am encountering an issue: when I execute:
helm install --name wso2am ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics
I receive the following error:
Error: release wso2am failed: configmaps "apim-conf" already exists
This happens on the first time of running the helm install command.
I've deleted the configmaps (kubectl delete configmaps apim-conf) and the release (helm del --purge wso2am), and when I try it again I get the same error.
Any assistance on how to get past this issue would be appreciated.
The issue was that there was a second copy of apim-conf.yaml, named apim-conf.yaml_old. This caused Helm to attempt to install apim-conf twice. Removing the duplicate resolved it.
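In practice that meant deleting the stale template and reinstalling; a sketch, assuming the chart lives at the path used above and keeps its manifests in the standard templates/ directory:
rm ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics/templates/apim-conf.yaml_old
helm del --purge wso2am
helm install --name wso2am ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics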
You can check the configmaps in the wso2 namespace by using the following command.
kubectl get configmaps -n wso2
Then you can remove the configmap apim-conf as follows.
kubectl delete configmap apim-conf -n wso2