I am trying to use kustomize from within kubectl. Specifically, I want to know the equivalent kubectl command for:
kustomize build --load_restrictor LoadRestrictionsNone config/overlays/dev_mutation | kubectl apply -f -
(running kustomize directly like this works and does what I expect)
I've tried this command:
$ kubectl apply -k config/overlays/dev_mutation --load_restrictor="LoadRestrictionsNone"
which complains that load_restrictor is deprecated and I should use load-restrictor instead.
W0712 07:58:16.811301 2407909 flags.go:39] load_restrictor is DEPRECATED and will be removed in a future version. Use load-restrictor instead.
Error: unknown flag: --load_restrictor
So, I tried replacing with the non-deprecated flag:
kubectl apply -k config/overlays/dev_mutation --load-restrictor="LoadRestrictionsNone"
If I do this, kubectl complains that --load-restrictor is unknown:
Error: unknown flag: --load-restrictor
How do I properly pass the load_restrictor/load-restrictor flag to kubectl apply -k?
Output of kubectl version:
gatekeeper$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
I suggest installing the kustomize binary directly, instead of relying on the version bundled in kubectl, which may be outdated. More info here: Install Kustomize
I do not think you can pass the --load-restrictor option to the kubectl apply -k command. However, I can confirm that this works:
kubectl kustomize --load-restrictor LoadRestrictionsNone <path_to_kustomization_dir>
You can use kustomize binary to achieve the same using
kustomize build --load-restrictor LoadRestrictionsNone <path_to_kustomization_dir>
Applying the generated YAML
If you want to apply the generated output using kubectl, you can pipe it like so:
kubectl kustomize --load-restrictor LoadRestrictionsNone <path_to_kustomization_dir> | kubectl apply -f -
or
kustomize build --load-restrictor LoadRestrictionsNone <path_to_kustomization_dir> | kubectl apply -f -
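For context, --load-restrictor LoadRestrictionsNone is typically only needed when the kustomization loads files from outside its own directory. A minimal sketch of such a setup, with hypothetical paths and file names:
# config/overlays/dev_mutation/kustomization.yaml (hypothetical example)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  # this patch file lives outside the kustomization root, so the default
  # load restrictions reject it unless LoadRestrictionsNone is set
  - ../../../shared/patches/dev_patch.yaml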
$ minikube kubectl create -f hello-app-deployment.yaml
Error: unknown shorthand flag: 'f' in -f
See 'minikube kubectl --help' for usage.
where hello-app-deployment.yaml is the deployment manifest file, saved in a working directory.
I tried saving the same manifest file in my home directory, but encountered the same error.
Are there any minikube or kubectl libraries missing?
I would say, try installing the kubectl CLI and then run this command:
kubectl apply -f ~/<filepath>
You can download the tool from the official website:
https://kubernetes.io/docs/tasks/tools/
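For example, on Linux x86_64 the install roughly looks like this (adjust OS and architecture as needed):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client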
I ran into an error today with kubectl that wasn't too clear. I'm using aws-iam-authenticator version 0.5.0.
_________:~$ kubectl --kubeconfig .kube/config get nodes -n my_nodes
Error in configuration: interactiveMode must be specified for ______ to use exec authentication plugin
Upgrading aws-iam-authenticator to the latest (0.5.9) fixed it.
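For reference, the error points at the interactiveMode field of the exec credential plugin section in the kubeconfig. A rough sketch of what that users entry can look like (user name, cluster name and args are hypothetical):
users:
- name: my-eks-user                                    # hypothetical user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args: ["token", "-i", "my-cluster"]              # hypothetical cluster name
      interactiveMode: IfAvailable                     # the field the error complains about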
I was setting up my new Mac for my eks environment.
After installing kubectl and aws-iam-authenticator and placing the kubeconfig file in the default location, I ran a kubectl command and got the error shown in the block below.
My cluster uses the v1alpha1 client auth API version, so I wanted to use the same one on my Mac as well.
I tried with the latest version (1.23.0) of kubectl as well and still got the same error. With aws-iam-authenticator I am on version 0.5.5 and was not able to download a lower version.
Can someone help me resolve this?
% kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1alpha1, plugin returned version client.authentication.k8s.io/v1beta1
Thanks and Regards,
Saravana
I have the same problem
You're using aws-iam-authenticator 0.5.5; AWS changed the way it behaves in 0.5.4 to require v1beta1.
It depends on your configuration, but you can try changing the K8s context you're using to v1beta1
by editing your kubeconfig file (usually in ~/.kube/config) from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Otherwise, switch back to aws-iam-authenticator 0.5.3; you might need to build it from source if you're on the M1 architecture, as there's no darwin-arm64 binary built for it.
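To see which API version your kubeconfig currently references before editing it, a quick check like this works (assuming the default kubeconfig path):
grep "client.authentication.k8s.io" ~/.kube/config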
This worked for me on an M1 chip:
sed -i .bak -e 's/v1alpha1/v1beta1/' ~/.kube/config
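Note that the -i .bak spelling above is for the BSD sed that ships with macOS; with GNU sed on Linux the equivalent would be roughly:
sed -i.bak -e 's/v1alpha1/v1beta1/' ~/.kube/config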
I fixed the issue with the command below:
aws eks update-kubeconfig --name mycluster
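If your cluster is in a non-default region or profile, the same command accepts the usual AWS CLI flags; the cluster name, region and profile below are placeholders:
aws eks update-kubeconfig --name mycluster --region eu-west-1 --profile myprofile
kubectl get nodes   # quick check that the regenerated kubeconfig works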
I also solved this by updating the apiVersion value in my kube config file (~/.kube/config), from client.authentication.k8s.io/v1alpha1 to client.authentication.k8s.io/v1beta1.
Also make sure the AWS CLI version is up-to-date. Otherwise, AWS IAM Authenticator might not work with v1beta1:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
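Afterwards you can confirm what is installed, e.g.:
aws --version
kubectl version --client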
This might be helpful for those who are using GitHub Actions.
In my situation I was using kodermax/kubectl-aws-eks with GitHub Actions.
I added the KUBECTL_VERSION and IAM_VERSION environment variables to each step using kodermax/kubectl-aws-eks to pin them to fixed versions.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: my-app
    IMAGE_TAG: ${{ github.sha }}
    KUBECTL_VERSION: "v1.23.6"
    IAM_VERSION: "0.5.3"
Using kubectl 1.21.9 fixed it for me, with asdf:
asdf plugin-add kubectl https://github.com/asdf-community/asdf-kubectl.git
asdf install kubectl 1.21.9
And I would recommend having a .tool-versions file with:
kubectl 1.21.9
This question is a duplicate of error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" CircleCI
Please change the authentication apiVersion from v1alpha1 to v1beta1.
Old
apiVersion: client.authentication.k8s.io/v1alpha1
New
apiVersion: client.authentication.k8s.io/v1beta1
Sometimes this can happen if the Kube cache is corrupted (which happened in my case).
Deleting and recreating the folder below worked for me:
sudo rm -rf $HOME/.kube && mkdir -p $HOME/.kube
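Keep in mind this wipes your kubeconfig along with the cache, so you will need to regenerate it afterwards; for EKS that is typically (cluster name is a placeholder):
aws eks update-kubeconfig --name mycluster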
Istio question: where is the pilot-discovery command?
I can't find it. The istio-1.8.0 directory has no command named pilot-discovery.
The pilot-discovery command is used by Pilot, which is now part of istiod.
istiod unifies functionality that Pilot, Galley, Citadel and the sidecar injector previously performed, into a single binary.
You can get your istio pods with
kubectl get pods -n istio-system
Use kubectl exec to get into your istiod container with
kubectl exec -ti <istiod-pod-name> -c discovery -n istio-system -- /bin/bash
Use pilot-discovery commands as mentioned in istio documentation.
e.g.
istio-proxy@istiod-f49cbf7c7-fn5fb:/$ pilot-discovery version
version.BuildInfo{Version:"1.8.0", GitRevision:"c87a4c874df27e37a3e6c25fa3d1ef6279685d23", GolangVersion:"go1.15.5", BuildStatus:"Clean", GitTag:"1.8.0-rc.1"}
In case you are interested in the code: https://github.com/istio/istio/blob/release-1.8/pilot/cmd/pilot-discovery/main.go
I compiled the binary myself (see the command sketch after this list):
1. Download the Istio project.
2. Run make build.
3. Set the Go module proxy.
4. cd out
You will see the binary there.
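A rough sketch of those steps as shell commands (the release branch and GOPROXY value are just examples; the exact output path under out/ depends on your OS and architecture):
git clone https://github.com/istio/istio.git
cd istio
git checkout release-1.8                         # example: match your target version
export GOPROXY=https://proxy.golang.org,direct   # example Go module proxy
make build
ls out/                                          # the pilot-discovery binary ends up under here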
I'd like to run or re-run a DAG from Composer, and the command below is what I have used, but I got some exceptions like this:
kubeconfig entry generated for europe-west1-leo-stage-bi-db7ea92f-gke.
Executing within the following Kubernetes cluster namespace: composer-1-7-7-airflow-1-10-1-db7ea92f
command terminated with exit code 2
[2020-07-14 12:44:34,472] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800
[2020-07-14 12:44:35,624] {default_celery.py:80} WARNING - You have configured a result_backend of redis://airflow-redis-service.default.svc.cluster.local:6379/0, it is highly recommended to use an alternative result_backend (i.e. a database).
[2020-07-14 12:44:35,628] {__init__.py:51} INFO - Using executor CeleryExecutor
[2020-07-14 12:44:35,860] {app.py:51} WARNING - Using default Composer Environment Variables. Overrides have not been applied.
[2020-07-14 12:44:35,867] {configuration.py:516} INFO - Reading the config from /etc/airflow/airflow.cfg
[2020-07-14 12:44:35,895] {configuration.py:516} INFO - Reading the config from /etc/airflow/airflow.cfg
usage: airflow [-h]
{backfill,list_tasks,clear,pause,unpause,trigger_dag,delete_dag,pool,variables,kerberos,render,run,initdb,list_dags,dag_state,task_failed_deps,task_state,serve_logs,test,webserver,resetdb,upgradedb,scheduler,worker,flower,version,connections,create_user}
...
airflow: error: unrecognized arguments: --yes
ERROR: (gcloud.composer.environments.run) kubectl returned non-zero status code.
This is my command; in the second line I have spelled out what the parameters are. Can anyone help with this?
Thank you
gcloud composer environments run leo-stage-bi --location=europe-west1 backfill -- regulatory_spain_monthly -s 20190701 -e 20190702 -t "regulatory_spain_rud_monthly_materialization" --reset_dagruns
gcloud composer environments run project-name --location=europe-west1 backfill -- DAG name -s start date -e end date -t task in the DAG --reset_dagruns
I've checked the Airflow backfill sub-command functionality with the gcloud util from the Google Cloud SDK 300.0.0 tool set, and my test attempts at running the backfill action finished with the same error:
airflow: error: unrecognized arguments: --yes
Digging into this issue and running the gcloud composer environments run command with --verbosity=debug, I found the cause of the problem:
gcloud composer environments run <ENVIRONMENT> --location=<LOCATION> --verbosity=debug backfill -- <DAG> -s <start_date> -e <end_date> -t "task_id" --reset_dagruns
DEBUG: Executing command: ['/usr/bin/kubectl', '--namespace',
'', 'exec', 'airflow-worker-*', '--stdin', '--tty',
'--container', 'airflow-worker', '--', 'airflow', 'backfill', '',
'-s', '<start_date>', '-e', '<end_date>', '-t', 'task_id',
'--reset_dagruns', '--yes']
The above output reflects how gcloud splits up the command-line arguments and dispatches them to the underlying kubectl command. From this, I assume that the --yes argument was, for an unknown reason, propagated and, worse, positioned after the rest of the parameters.
Looking for a workaround, I composed the equivalent kubectl call to the particular Airflow worker Pod, manually passing the Airflow command-line parameters:
kubectl -it exec $(kubectl get po -l run=airflow-worker -o jsonpath='{.items[0].metadata.name}' \
-n $(kubectl get ns| grep composer*| awk '{print $1}')) -n $(kubectl get ns| grep composer*| awk '{print $1}') \
-c airflow-worker -- airflow backfill <DAG> -s <start_date> -e <end_date> -t "task_id" --reset_dagruns
With this, the airflow backfill command succeeds without throwing any error.
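The same workaround is a bit easier to read if the namespace and worker Pod name are captured in variables first (a sketch; the DAG name, dates and task id remain placeholders):
NS=$(kubectl get ns | grep composer | awk '{print $1}')
POD=$(kubectl get po -n "$NS" -l run=airflow-worker -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -n "$NS" -c airflow-worker -- \
  airflow backfill <DAG> -s <start_date> -e <end_date> -t "task_id" --reset_dagruns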
To trigger a manual run you can use the trigger_dag parameter:
gcloud composer environments run <COMPOSER_INSTANCE_NAME> --location <LOCATION> trigger_dag -- <DAG_NAME>
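For example, with a hypothetical DAG called my_dag in the environment from the question, that would look like:
gcloud composer environments run leo-stage-bi --location europe-west1 trigger_dag -- my_dag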