error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" in kubectl [duplicate] - kubectl

I was setting up my new Mac for my EKS environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the command block below.
My cluster uses the v1alpha1 client auth API version, so basically I wanted to use the same one on my Mac as well.
I tried the latest version (1.23.0) of kubectl as well and still got the same error, whereas with aws-iam-authenticator (version 0.5.5) I was not able to download a lower version.
Can someone help me resolve it?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana

I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed its behaviour in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try switching the Kubernetes context you're using to v1beta1
by editing your kubeconfig file (usually ~/.kube/config) and changing client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1, as sketched below.
Otherwise, switch back to aws-iam-authenticator 0.5.3. You might need to build it from source if you're on the M1 architecture, as there is no darwin-arm64 binary published for it.
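For reference, the relevant piece of the kubeconfig is the exec stanza under users. A minimal sketch of what it looks like after the change (the user name, cluster name and plugin arguments here are placeholders; yours will differ):
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # was client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - my-cluster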

This worked for me on a Mac with the M1 chip (BSD sed syntax):
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
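For what it's worth, GNU sed on Linux wants the backup suffix attached to -i, so there the equivalent would presumably be:
sed -i.bak -e 's/v1alpha1/v1beta1/' ~/.kube/config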

I fixed the issue with the command below, which rewrites the kubeconfig entry for the cluster:
aws eks update-kubeconfig --name mycluster
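If the cluster lives in a region other than your CLI default, you may also need to pass --region (the region below is just an example):
aws eks update-kubeconfig --name mycluster --region eu-west-1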

I also solved this by updating the apiVersion value in my kubeconfig file (~/.kube/config)
from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.

Also make sure the AWS CLI version is up to date; otherwise AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
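Those commands are for Linux. Since the original question is about a Mac, the macOS equivalent (as far as I recall from the AWS CLI install docs) is the .pkg installer:
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /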

This might be helpful for those who hit this issue with GitHub Actions.
In my case I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step that uses kodermax/kubectl-aws-eks, to pin them to fixed versions.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"

Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
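You can let asdf write that file for you by running the standard command in the project directory:
asdf local kubectl 1.21.9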
This question is a duplicate of: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" (CircleCI).

Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1

Sometimes this can happen if the kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me. Note that this also removes your kubeconfig, so you will need to regenerate it afterwards (e.g. with aws eks update-kubeconfig, as shown above).
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube

Related

kubectl erroring on interactiveMode must be specified

I ran into an error today with kubectl that wasn't too clear. I'm using aws-iam-authenticator version 0.5.0.
_________:~$ kubectl --kubeconfig .kube/config get nodes -n my_nodes
Error in configuration: interactiveMode must be specified for ______ to use exec authentication plugin
Upgrading aws-iam-authenticator to the latest (0.5.9) fixed it.
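If upgrading isn't immediately an option, kubectl also accepts the field it is complaining about directly in the kubeconfig's exec stanza. A minimal sketch (user and cluster names are placeholders; IfAvailable is one of the accepted values alongside Never and Always):
users:
- name: my-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      interactiveMode: IfAvailable
      command: aws-iam-authenticator
      args: ["token", "-i", "my-cluster"]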

Istio question, where is pilot-discovery command?

Where is the pilot-discovery command? I can't find it; the istio-1.8.0 directory contains no command named pilot-discovery.
The pilot-discovery command is used by Pilot, which is now part of istiod.
istiod unifies the functionality that Pilot, Galley, Citadel and the sidecar injector previously performed into a single binary.
You can get your istio pods with
kubectl get pods -n istio-system
Use kubectl exec to get into your istiod container with
kubectl exec -ti <istiod-pod-name> -c discovery -n istio-system -- /bin/bash
Use pilot-discovery commands as mentioned in istio documentation.
e.g.
istio-proxy@istiod-f49cbf7c7-fn5fb:/$ pilot-discovery version
version.BuildInfo{Version:"1.8.0", GitRevision:"c87a4c874df27e37a3e6c25fa3d1ef6279685d23", GolangVersion:"go1.15.5", BuildStatus:"Clean", GitTag:"1.8.0-rc.1"}
In case you are interested in the code: https://github.com/istio/istio/blob/release-1.8/pilot/cmd/pilot-discovery/main.go
I compiled the binary myself:
1. Download the istio project.
2. Run make build.
3. Set the Go module proxy if the build needs it.
4. cd out
You will see the binary there.
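Roughly, those steps look like this (a sketch, assuming a Linux host with Go, git and make installed; the out/ directory is where the Makefile places its build artifacts):
git clone https://github.com/istio/istio.git
cd istio
export GOPROXY=https://proxy.golang.org,direct   # module proxy, only if the default is blocked
make build
ls out/                                          # the pilot-discovery binary ends up under here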

ASP.NET Core app deploy to AWS Beanstalk using GitHub Actions

I am trying to build a CD pipeline using GitHub Actions and AWS Beanstalk (Linux) for my ASP.NET Core 3.1 app.
I have configured the YML file as follows:
- name: dotnet Build
  run: dotnet build src/SLNNAME.sln -c Release --no-restore
- name: dotnet Publish
  run: |
    dotnet publish src/SLNNAME.Server/SLNNAME.Server.Web/SLNNAME.Server.Web.csproj -c Release -o staging_SLNNAME_server -r linux-x64
- name: Build deployment package
  run: zip -r staging-server-deploy.zip staging_SLNNAME_server
..
- name: Deploy to AWS Beanstalk
  uses: einaregilsson/beanstalk-deploy@v10
  with:
    aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    region: ${{ secrets.AWS_REGION }}
    application_name: SLNNAME-staging-app-web
    environment_name: SLNNAME-staging-server
    version_label: "staging-app-web-${{ steps.format-time.outputs.replaced }}"
    deployment_package: staging-server-deploy.zip
But an error occurs during deployment to AWS. In particular, looking at the Beanstalk logs I can read the following error:
[ERROR] An error occurred during execution of command [app-deploy] - [CheckProcfileForDotNetCoreApplication]. Stop running the command. Error: error stat /var/app/staging/staging_SLNNAME_server/SLNNAME.dll: no such file or directory with file /var/app/staging/staging_SLNNAME_server/SLNNAME.dll
Basically, I think it is looking for a DLL named after the solution instead of the project (SLNNAME.Server.Web). I wonder where it is picking up the solution name, since it is not part of the zip file.
I gave the --self-contained flag a try as well, but the error is exactly the same.
I get this error even if I try to publish the solution using the AWS Toolkit Visual Studio extension.
The only way I have found to fix this is to change the project's output DLL name to match the solution name, but that doesn't make any sense to me and I might run into more problems in the future.
Thanks
Warning: this is a workaround, not a solution!
On the project that is failing to deploy, change the "Assembly name" (Project Properties / Application tab) to the name of the DLL it's missing (typically the solution name, or the first period-separated part of the namespace),
i.e. "SLNNAME".
Then redeploy your Beanstalk app and it should work.
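If you'd rather not touch the project file, the same workaround can presumably be applied at publish time by overriding the MSBuild property on the command line (SLNNAME stands in for whatever DLL name Beanstalk expects):
dotnet publish src/SLNNAME.Server/SLNNAME.Server.Web/SLNNAME.Server.Web.csproj -c Release -o staging_SLNNAME_server -r linux-x64 -p:AssemblyName=SLNNAME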
I opened a ticket with AWS and the issue has been confirmed as a bug by the support team.
They replied to me with:
I reached out to internal team and informed the team regarding the bug. [...] I have added your voice for this request I raised with internal team to add more weightage on the issue. Currently, I am unable to provide you with ETA [...] you can certainly be assured that the internal team is looking into this issue and fix it as quickly as possible
Hoping they will fix it as soon as possible.
Today I received an email from AWS support saying the issue has been fixed.
I have tried it and it actually works!
Just be sure to use Linux 2 / 2.0.1 as the platform version.

bug in codedeploy afterinstall hook?

We have auto scaling set up for our Symfony 3 application and use AWS CodeDeploy to deploy to the Auto Scaling instances.
My appspec.yml file:
version: 0.0
os: linux
files:
  - source: /
    destination: /usr/share/nginx/<some_dir>
hooks:
  AfterInstall:
    - location: post_deploy.sh
      timeout: 180
      runas: ubuntu
post_deploy.sh
#!/bin/bash
doc_root=/usr/share/nginx/<some_dir>
current_dir=$PWD
cd $doc_root
sudo -E composer install --no-interaction --no-dev --optimize-autoloader
cd $current_dir
I also exported environment variables for the parameters.yml file.
When we deploy a revision, CodeDeploy succeeds, but when I access my app through the browser the nginx error log says:
PHP Fatal error: Uncaught exception 'Symfony\Component\DependencyInjection\Exception\ParameterNotFoundException' with message 'You have requested a non-existent parameter "database.host". Did you mean one of these: "database_host", "database_port"?
The strange thing is that when I run the post_deploy.sh script manually, by logging in to the server, it executes fine and there are no errors afterwards.
I don't know how to deal with it.
Try changing database.host to database_host; that's what the message indicates.
CodeDeploy doesn't preserve environment variables in spite of the -E option,
so I pass the variables on the command itself, like this:
sudo SYMFONY_ENV=$SYMFONY_ENV SYMFONY__DATABASE__NAME=$SYMFONY__DATABASE__NAME SYMFONY__DATABASE__USER=$SYMFONY__DATABASE__USER SYMFONY__DATABASE__HOST=$SYMFONY__DATABASE__HOST SYMFONY__DATABASE__PORT=$SYMFONY__DATABASE__PORT SYMFONY__DATABASE__PASSWORD=$SYMFONY__DATABASE__PASSWORD COMPOSER_HOME=$COMPOSER_HOME composer install --no-interaction --no-dev --optimize-autoloader
worked for me.
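An alternative sketch, in case the inline command gets unwieldy: persist the variables to a file when the instance is provisioned and source it from the hook script, since CodeDeploy hooks don't inherit your interactive shell's environment (the file path here is hypothetical):
#!/bin/bash
# post_deploy.sh (variant): load the variables from a file written at provisioning time
set -a                           # auto-export everything sourced below
source /etc/myapp/deploy.env     # hypothetical file holding SYMFONY_ENV, SYMFONY__DATABASE__HOST, ...
set +a
doc_root=/usr/share/nginx/<some_dir>
cd $doc_root
sudo -E composer install --no-interaction --no-dev --optimize-autoloader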

Kubernetes on AWS

When running the following command on kube-master (CoreOS):
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
I get following error:
Can't find aws in PATH, please fix and retry.
I have already set PATH. Can anyone tell me which 'aws' it is searching for? Is it the aws directory in the kubernetes repo, i.e. kubernetes/cluster/aws?
Follow the AWS CLI installation guide and then ensure your PATH is set correctly.
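A quick way to check whether the CLI is actually visible on your PATH (standard commands):
which aws        # should print the path to the aws binary
aws --version    # should print the installed CLI version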
Yes, you are right.
If you set "aws" as KUBERNETES_PROVIDER, Kubernetes will use the scripts that reside in kubernetes/cluster/aws. If no KUBERNETES_PROVIDER is set, I believe the default is to rely on the gcloud CLI tool.
If you are using Ubuntu, run the command below; it will resolve your issue.
apt-get install awscli