I have followed the steps mentioned here: https://github.com/wso2/kubernetes-apim/tree/master/helm/pattern-1. I am encountering an issue that when I execute:
helm install --name wso2am ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics
I receive the following error:
Error: release wso2am failed: configmaps "apim-conf" already exists
This happens the first time I run the helm install command.
I've deleted the configmaps (kubectl delete configmaps apim-conf) and the release (helm del --purge wso2am), and when I try it again I get the same error.
Any assistance on how to get past this issue would be appreciated.
The issue was that there was a second copy of apim-conf.yaml in the chart's templates directory, named apim-conf.yaml_old. Helm renders every file under templates/, so it attempted to create apim-conf twice. Removing the stale copy resolved it.
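If you hit something similar, one quick way to spot a duplicate manifest is to render the chart locally and count how many times the resource name appears (the chart path here is the one from the question; adjust it to your checkout):

helm template ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics | grep -c 'name: apim-conf'

A count greater than 1 means two templates are producing the same object.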
You can check the configmaps in the wso2 namespace by using the following command.
kubectl get configmaps -n wso2
Then you can remove the configmap apim-conf as follows.
kubectl delete configmap apim-conf -n wso2
I ran into an error today with kubectl that wasn't too clear. I'm using aws-iam-authenticator version 0.5.0.
_________:~$ kubectl --kubeconfig .kube/config get nodes -n my_nodes
Error in configuration: interactiveMode must be specified for ______ to use exec authentication plugin
Upgrading aws-iam-authenticator to the latest (0.5.9) fixed it.
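If you installed it with Homebrew, the upgrade is a one-liner (assuming the standard formula name aws-iam-authenticator):

brew upgrade aws-iam-authenticator
aws-iam-authenticator version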
I was setting up my new Mac for my eks environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac as well.
I tried the latest version (1.23.0) of kubectl as well and still got the same error. I also tried aws-iam-authenticator (version 0.5.5), but was not able to download a lower version.
Can someone help me to resolve it?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed its behavior in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try to change the K8s context you're using to v1beta1
by editing your kubeconfig file (usually in ~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source on the M1 architecture, as there's no darwin-arm64 binary built for it.
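For reference, the part of the kubeconfig to edit is the exec block under the matching users entry; a minimal sketch, with placeholder user and cluster names:

users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster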
This worked for me on an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
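This rewrites the cluster's entry in ~/.kube/config using the AWS CLI; a recent CLI emits a v1beta1 exec block. If the cluster is in a non-default region, pass it explicitly (cluster name and region here are placeholders):

aws eks update-kubeconfig --name mycluster --region us-east-1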
I also solved this by updating the apiVersion value in my kubeconfig file (~/.kube/config), from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Also make sure the AWS CLI version is up-to-date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
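You can confirm the upgrade afterwards with:

aws --version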
This might help those who hit this issue while using GitHub Actions.
In my case I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step that uses kodermax/kubectl-aws-eks, to pin them to fixed versions:
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
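asdf can write that file for you; running this in the project directory pins kubectl there:

asdf local kubectl 1.21.9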
This question is a duplicate of error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the Kube cache is corrupted (which happened in my case).
Deleting and recreating the below folder worked for me.
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
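Note that this wipes your kubeconfig as well, so you will need to regenerate it (for EKS, aws eks update-kubeconfig as shown above). If you only want to clear the cache, deleting just the cache directories is less destructive:

rm -rf $HOME/.kube/cache $HOME/.kube/http-cache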
I am getting the below error while trying to manually inject an Istio sidecar container into a pod.
Kubernetes version: v1.21.0
Istio version: 1.8.0
Installation commands:
kubectl create namespace istio-system
helm install --namespace istio-system istio-base istio/charts/base
helm install --namespace istio-system istiod istio/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt
In kubectl get events, I can see the below error:
Error creating: admission webhook "sidecar-injector.istio.io" denied the request: template: inject:443: function "appendMultusNetwork" not defined
In the kube-apiserver logs, the below errors are observed:
W0505 02:05:30.750732 1 dispatcher.go:142] rejected by webhook "validation.istio.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.istio.io\" denied the request: configuration is invalid: gateway must have at least one server", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
Please let me know if you have any clue on how to resolve this error.
I went over the step-by-step installation with the official documentation and could not reproduce your problem.
Here are a few things worth checking:
Did you execute all the commands correctly?
Maybe you are running a different version of Istio? You can check by issuing the istioctl version command.
Maybe you changed something in config files? If you did, what exactly?
Try the latest version of Istio (1.9)
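As a rough check on the version-mismatch theory: the appendMultusNetwork template function only exists in newer Istio releases, so this error usually indicates that the injection template stored in the cluster is newer than the istiod rendering it (or vice versa). Assuming the default configmap name, you can grep the template for the function and compare versions:

kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep appendMultusNetwork
istioctl version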
I am using the helm upgrade xyz --install command, and my releases were failing due to other Helm issues, so no successful release has been made yet.
Then, one time when the above command was in progress, I pressed Ctrl+C. Since then, it shows Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress whenever I try helm upgrade again.
When I do helm history xyz, it shows Error: release: not found. Now I don't know how to roll back the previous operation so that I can try helm upgrade again.
I tried --force too (helm upgrade xyz --install --force), but it still shows that an operation is in progress.
So how can I roll back the previous operation when I don't have any successful release?
The solution is to use helm rollback to restore your previous revision:
helm rollback <name> <revision>
Your previously installed/upgraded releases are in the pending-upgrade status. These releases are not shown when you list releases. Try helm status [release_name]; it shows the current state of the release. Uninstall the releases whose status is pending.
Then you can reinstall the charts without any issues.
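Pending releases do not show up in a plain helm list, but you can list them directly:

helm list --pending -A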
OP:
Found the issue.
I was not giving a namespace in the helm delete command, so it was using the default namespace. Once I passed the namespace, it worked:
helm -n namespace history myapp
helm -n namespace rollback myapp 7
Here 7 is just the revision to which I rolled back (note that helm rollback takes the release name followed by the revision). After this, you can proceed with your regular upgrade.
In my case, I had to get the status of that release using
helm status <release-name>
Then I saw that the status of that release was pending-upgrade. I simply uninstalled that release using
helm uninstall <release-name>
and ran the install command again.
I am trying to install Kube Prometheus Stack using helm.
I have already set up ingress, so it needs to run behind a proxy.
For that, I have updated the chart's values using the below command.
helm show values prometheus-com/kube-prometheus-stack > values.yaml
I followed this doc and changed the configuration:
[server]
domain = example.com
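For context, in the kube-prometheus-stack chart that Grafana ini section normally lives under the grafana subchart's values; a minimal sketch of the values.yaml edit, assuming the default subchart layout:

grafana:
  grafana.ini:
    server:
      domain: example.com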
Now I am trying to install using the below command.
helm install monitoring ./values.yaml -n monitoring
I have already created a monitoring namespace.
I get the below error on running the above command.
Error: file '/home/user/values.yaml' seems to be a YAML file, but expected a gzipped archive
Your helm command should be something like this:
$ helm install <release-name> <registry-name>/<chart-name> --values ./values.yaml -n monitoring
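For example, assuming the repo was added under its usual name prometheus-community (the question's prometheus-com looks truncated):

helm install monitoring prometheus-community/kube-prometheus-stack --values ./values.yaml -n monitoring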