I am getting the error below while trying to manually inject the Istio sidecar container into a pod.
Kubernetes version v1.21.0
Istio version : 1.8.0
Installation commands:
kubectl create namespace istio-system
helm install --namespace istio-system istio-base istio/charts/base
helm install --namespace istio-system istiod istio/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt
In kubectl get events, I can see the error below:
Error creating: admission webhook "sidecar-injector.istio.io" denied the request: template: inject:443: function "appendMultusNetwork" not defined
In the kube-apiserver logs, the following errors are observed:
W0505 02:05:30.750732 1 dispatcher.go:142] rejected by webhook "validation.istio.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.istio.io\" denied the request: configuration is invalid: gateway must have at least one server", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
Please let me know if you have any clue on how to resolve this error.
I went through the step-by-step installation in the official documentation and could not reproduce your problem.
Here are a few things worth checking:
Did you execute all the commands correctly?
Maybe you are running a different version of Istio? You can check by issuing the istioctl version command (see the example after this list).
Maybe you changed something in the config files? If so, what exactly?
Try the latest version of Istio (1.9).
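For the version check in particular, a quick sanity run looks like this (the output shape is typical for a 1.8 install; your values will differ):
$ istioctl version
client version: 1.8.0
control plane version: 1.8.0
data plane version: 1.8.0 (2 proxies)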
Related
I ran into an error today with kubectl that wasn't too clear. I'm using aws-iam-authenticator version 0.5.0.
_________:~$ kubectl --kubeconfig .kube/config get nodes -n my_nodes
Error in configuration: interactiveMode must be specified for ______ to use exec authentication plugin
Upgrading aws-iam-authenticator to the latest (0.5.9) fixed it.
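If upgrading the binary is not an option, the same error can usually be cleared by declaring interactiveMode explicitly in the exec stanza of the kubeconfig; here is a minimal sketch of that section, with illustrative user and cluster names:
users:
  - name: my-cluster-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator
        args: ["token", "-i", "my-cluster"]
        interactiveMode: IfAvailable # accepted values: Never, IfAvailable, Always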
Every document I found only tells you how to enable or disable a feature while installing a new Istio instance, but I think in a lot of cases people need to update an existing Istio configuration.
In Accessing External Services, for instance, it says I need to provide <flags-you-used-to-install-Istio>, but what if I don't know how the instance was installed?
In Address auto allocation, it doesn't mention a way to update the configuration at all. Does that imply this feature has to be enabled in a fresh installation?
Why is there no istioctl update command?
The confusion totally makes sense; at the very least it would be nice for this to be called out somewhere.
Basically, there is no update command for the same reason there is no kubectl update command. What istioctl does is generate YAML output that represents, in a declarative way, how your application should be running, and then apply it to the cluster, letting Kubernetes handle it.
So istioctl install with the same values will produce the same output, and when it is applied to Kubernetes, if nothing changed, nothing will be updated.
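One way to see this idempotence for yourself is to pipe a freshly generated manifest through kubectl diff; the overlay file name below is illustrative:
istioctl manifest generate -f my-istio-config.yaml | kubectl diff -f -
If the command prints nothing, re-applying would change nothing in the cluster.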
I will rephrase your questions to be more precise; I believe the context is the same:
How do I find the Istio installation configuration
Prior to installation, you should have generated the manifest. This can be done with:
istioctl manifest generate <flags-you-use-to-install-Istio> > $HOME/istio-manifest.yaml
With this manifest you can inspect what is being installed, and track changes to the manifest over time.
This will also capture any changes to the underlying charts (if installed with Helm). Just add the -f flag to the command:
istioctl manifest generate -f path/to/manifest.yaml > $HOME/istio-manifest.yaml
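If you keep the generated manifests around, istioctl can also diff two of them directly, which is handy for tracking changes over time; both file names below are illustrative:
istioctl manifest diff $HOME/istio-manifest.yaml $HOME/istio-manifest-new.yaml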
If there is no manifest available, you can check the IstioOperator custom resource, but Istio must have been installed with the operator for it to be available.
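A quick way to check for that custom resource, assuming the operator placed it in the usual istio-system namespace:
kubectl get istiooperator -n istio-system -o yaml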
If neither of the above is available, you are out of luck. That is not an ideal situation, but it is what we have.
How do I customize the Istio installation
Using IstioOperator
You can pass a new configuration, in YAML format, to istioctl install:
echo '
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 1000m # override from default 500m
            memory: 4096Mi # ... default 2048Mi
        hpaSpec:
          maxReplicas: 10 # ... default 5
          minReplicas: 2 # ... default 1
' | istioctl install -f -
The above example adjusts the resources and horizontal pod autoscaling settings for Pilot.
Any other configuration (ServiceEntry, DestinationRule, etc.) is deployed like any other resource, with kubectl apply.
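For example, here is a minimal ServiceEntry for an external HTTPS service; the name and host are illustrative, not taken from the question:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
    - api.example.com
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: TLS
  resolution: DNS
Saved as service-entry.yaml, it is deployed with kubectl apply -f service-entry.yaml.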
Why is there no istioctl update command
Because of #2: changes to Istio are applied using istioctl install.
If you want to upgrade Istio to a newer version, there are instructions available in the docs.
I registered an account just to say this: I had been searching for a long time for how to update Istio, such as the global mesh configuration. After seeing your post and the answer below, I finally have an answer.
My previous approach was to keep two configurations, one for istiod and one for the ingress. Whenever I ran istioctl install -f istiod.yaml, my ingress would be deleted, which bothered me.
Once I saw this post, I got it.
I merged the two files into one. The following is my file; it can be updated without deleting my ingress configuration:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: TEXT
    enableTracing: true
    defaultConfig:
      tracing:
        zipkin:
          address: jaeger-collector.istio-system:9411
        sampling: 100
  components:
    ingressGateways:
      - name: ingressgateway
        namespace: istio-ingress
        enabled: true
        label:
          # Set a unique label for the gateway. This is required to ensure Gateways
          # can select this workload
          istio: ingressgateway
  values:
    gateways:
      istio-ingressgateway:
        # Enable gateway injection
        injectionTemplate: gateway
Thank you very much; this post solved my problem.
Istio question: where is the pilot-discovery command?
I can't find it. The istio-1.8.0 directory has no command named pilot-discovery.
The pilot-discovery command is used by Pilot, which is now part of istiod.
istiod unifies functionality that Pilot, Galley, Citadel and the sidecar injector previously performed, into a single binary.
You can get your Istio pods with:
kubectl get pods -n istio-system
Use kubectl exec to get into your istiod container with:
kubectl exec -ti <istiod-pod-name> -c discovery -n istio-system -- /bin/bash
Use pilot-discovery commands as mentioned in the Istio documentation, e.g.:
istio-proxy@istiod-f49cbf7c7-fn5fb:/$ pilot-discovery version
version.BuildInfo{Version:"1.8.0", GitRevision:"c87a4c874df27e37a3e6c25fa3d1ef6279685d23", GolangVersion:"go1.15.5", BuildStatus:"Clean", GitTag:"1.8.0-rc.1"}
In case you are interested in the code: https://github.com/istio/istio/blob/release-1.8/pilot/cmd/pilot-discovery/main.go
I compiled the binary myself:
1. Download the Istio project.
2. Run make build.
3. Set the Go module proxy.
4. cd out
You will see the binary there.
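A sketch of those steps as shell commands, with the proxy set before the build so module downloads resolve; the proxy URL and the platform-specific output directory are assumptions, so adjust them to your environment:
git clone https://github.com/istio/istio.git
cd istio
export GOPROXY=https://proxy.golang.org,direct # set the Go module proxy before building
make build
cd out
ls linux_amd64/ # on Linux/amd64 the pilot-discovery binary lands in a directory like this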
I have Istio (version 1.16.3) configured with an external Prometheus, and I have the Prometheus ServiceMonitor objects configured using the built-in Prometheus operator, based on the discussion in this issue: https://github.com/istio/istio/issues/21187
For the most part this works fine, except I noticed that the kubernetes-services-secure-monitor and the kubernetes-pods-secure-monitor were also created, and this resulted in Prometheus throwing certificate-not-found errors, as expected, because I have not set these up.
"level=error ts=2020-07-06T03:43:33.464Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/prometheus/secrets/istio.prometheus/root-cert.pem: open /etc/prometheus/secrets/istio.prometheus/root-cert.pem: no such file or directory" scrape_pool=istio-system/kubernetes-pods-secure-monitor/0
I also noticed that the service monitor creation can be disabled using the Values.prometheus.provisionPrometheusCert flag, as per this:
istio/manifests/charts/istio-telemetry/prometheusOperator/templates/servicemonitors.yaml
{{- if .Values.prometheus.provisionPrometheusCert }}
However, re-applying the config using istioctl install did not delete those service monitors.
Does the istioctl install command not delete/prune existing resources?
Here is my full configuration:
apiVersion: install.istio.io/v1alpha1
kind: IstioControlPlane
metadata:
  namespace: istio-system
  name: istio-controlplane
  labels:
    istio-injection: enabled
spec:
  profile: default
  addonComponents:
    prometheus:
      enabled: false
    prometheusOperator:
      enabled: true
    grafana:
      enabled: false
    kiali:
      enabled: true
      namespace: staging
    tracing:
      enabled: false
  values:
    global:
      proxy:
        logLevel: warning
      mountMtlsCerts: false
      prometheusNamespace: monitoring
      tracer:
        zipkin:
          address: jaeger-collector.staging:9411
    prometheusOperator:
      createPrometheusResource: false
    prometheus:
      security:
        enabled: false
      provisionPrometheusCert: false
There are two separate concerns here: upgrading to a new version of Istio, and updating the configuration.
Upgrade
As far as I know, there were a lot of issues when upgrading Istio from older versions to 1.4, 1.5, or 1.6, but now that istioctl upgrade has arrived you shouldn't be worried about upgrading your cluster.
The istioctl upgrade command performs an upgrade of Istio. Before performing the upgrade, it checks that the Istio installation meets the upgrade eligibility criteria. Also, it alerts the user if it detects any changes in the profile default values between Istio versions.
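In practice that is a single command, passing the same overlay file the installation used (the file name is illustrative):
$ istioctl upgrade -f my-istio-config.yaml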
Additionally, Istio 1.6 will support a new upgrade model to safely canary-deploy new versions of Istio. In this new model, proxies will associate with a specific control plane that they use. This allows a new version to deploy to the cluster with less risk: no proxies connect to the new version until the user explicitly chooses to. This allows gradually migrating workloads to the new control plane while monitoring changes using Istio telemetry to investigate any issues.
Related documentation about that is here and here.
Update
As I mentioned in the comments, the two things I found which might help are:
Istio operator logs
If something goes wrong with your update, it will appear in the Istio operator logs and the update will fail.
You can observe the changes that the controller makes in the cluster in response to IstioOperator CR updates by checking the operator controller logs:
$ kubectl logs -f -n istio-operator $(kubectl get pods -n istio-operator -lname=istio-operator -o jsonpath='{.items[0].metadata.name}')
istioctl verify-install
Verify a successful installation
You can check whether the Istio installation succeeded using the verify-install command, which compares the installation on your cluster to a manifest you specify.
If you didn’t generate your manifest prior to deployment, run the following command to generate it now:
$ istioctl manifest generate <your original installation options> > $HOME/generated-manifest.yaml
Then run the following verify-install command to see if the installation was successful:
$ istioctl verify-install -f $HOME/generated-manifest.yaml
Hope you find this useful.
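As for the leftover ServiceMonitors from the question: since re-applying the config does not remove them, a one-off manual delete is the practical cleanup, using the names and namespace from your error output:
$ kubectl -n istio-system delete servicemonitor kubernetes-services-secure-monitor kubernetes-pods-secure-monitor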
I have followed the steps mentioned here: https://github.com/wso2/kubernetes-apim/tree/master/helm/pattern-1. I am encountering an issue where, when I execute:
helm install --name wso2am ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics
I receive the following error:
Error: release wso2am failed: configmaps "apim-conf" already exists
This happens the first time I run the helm install command.
I've deleted the configmaps (kubectl delete configmaps apim-conf) and the release (helm del --purge wso2am), and when I try it again I get the same error.
Any assistance on how to get past this issue would be appreciated.
The issue was that there was a second copy of apim-conf.yaml, named apim-conf.yaml_old. This caused Helm to attempt to install apim-conf twice. This is now resolved.
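For anyone hitting the same thing, the stray file just needs to be removed from the chart before installing. The path below is a guess based on the chart location used in the question, so adjust it to your checkout:
rm ~/git/src/github.com/wso2/kubernetes-apim/helm/pattern-1/apim-with-analytics/templates/apim-conf.yaml_old # hypothetical path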
You can check the configmaps in the wso2 namespace by using the following command.
kubectl get configmaps -n wso2
Then you can remove the configmap apim-conf as follows.
kubectl delete configmap apim-conf -n wso2