I have Istio (version 1.16.3) configured with an external Prometheus and I have the Prometheus ServiceMonitor objects configured using the built in Prometheus operator based on the discussion in this issue: https://github.com/istio/istio/issues/21187
For the most part this works fine, except that I noticed that the kubernetes-services-secure-monitor and kubernetes-pods-secure-monitor were also created, and this resulted in Prometheus throwing certificate-not-found errors, which is expected because I have not set those certificates up.
"level=error ts=2020-07-06T03:43:33.464Z caller=manager.go:188 component="scrape manager" msg="error creating new scrape pool" err="error creating HTTP client: unable to load specified CA cert /etc/prometheus/secrets/istio.prometheus/root-cert.pem: open /etc/prometheus/secrets/istio.prometheus/root-cert.pem: no such file or directory" scrape_pool=istio-system/kubernetes-pods-secure-monitor/0
I also noticed that the service monitor creation can be disabled by using the Values.prometheus.provisionPrometheusCert flag as per this:
istio/manifests/charts/istio-telemetry/prometheusOperator/templates/servicemonitors.yaml
{{- if .Values.prometheus.provisionPrometheusCert }}
However, re-applying the config using `istioctl install` did not delete those service monitors.
Does the istioctl install command not delete/prune existing resources?
Here is my full configuration:
apiVersion: install.istio.io/v1alpha1
kind: IstioControlPlane
metadata:
  namespace: istio-system
  name: istio-controlplane
  labels:
    istio-injection: enabled
spec:
  profile: default
  addonComponents:
    prometheus:
      enabled: false
    prometheusOperator:
      enabled: true
    grafana:
      enabled: false
    kiali:
      enabled: true
      namespace: staging
    tracing:
      enabled: false
  values:
    global:
      proxy:
        logLevel: warning
      mountMtlsCerts: false
      prometheusNamespace: monitoring
      tracer:
        zipkin:
          address: jaeger-collector.staging:9411
    prometheusOperator:
      createPrometheusResource: false
    prometheus:
      security:
        enabled: false
      provisionPrometheusCert: false
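As a workaround, deleting the leftover monitors by hand should work (names and namespace taken from the error above, assuming the Prometheus Operator CRDs are installed):
kubectl -n istio-system delete servicemonitor kubernetes-pods-secure-monitor
kubectl -n istio-system delete servicemonitor kubernetes-services-secure-monitor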
These are two separate concerns: upgrading to a new version of Istio, and updating the configuration.
Upgrade
As far as I know there were a lot of issues when upgrading Istio from older versions to 1.4, 1.5 and 1.6, but now that istioctl upgrade is available you shouldn't be worried about upgrading your cluster.
The istioctl upgrade command performs an upgrade of Istio. Before performing the upgrade, it checks that the Istio installation meets the upgrade eligibility criteria. Also, it alerts the user if it detects any changes in the profile default values between Istio versions.
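For example, an in-place upgrade can be driven by the same IstioOperator configuration file that was used for the installation (the file name below is just a placeholder):
istioctl upgrade -f $HOME/istio-config.yaml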
Additionally, Istio 1.6 supports a new upgrade model to safely canary-deploy new versions of Istio. In this model, proxies associate with a specific control plane that they use. This lets a new version be deployed to the cluster with less risk: no proxies connect to the new version until the user explicitly chooses to, so workloads can be migrated gradually to the new control plane while monitoring changes with Istio telemetry to investigate any issues.
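A rough sketch of that canary flow, using a hypothetical revision name and namespace, might look like this:
# Install the new control plane alongside the existing one under a revision
istioctl install --set revision=canary
# Point a namespace at the new control plane and restart its workloads
kubectl label namespace test-ns istio-injection- istio.io/rev=canary
kubectl rollout restart deployment -n test-ns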
Related documentation about that is here and here.
Update
As I mentioned in comments, the 2 things I found which might help are
istioctl operator logs
If something goes wrong with your update, it will show up in the Istio operator logs and the update will fail.
You can observe the changes that the controller makes in the cluster in response to IstioOperator CR updates by checking the operator controller logs:
$ kubectl logs -f -n istio-operator $(kubectl get pods -n istio-operator -lname=istio-operator -o jsonpath='{.items[0].metadata.name}')
istioctl verify install
Verify a successful installation
You can check if the Istio installation succeeded using the verify-install command which compares the installation on your cluster to a manifest you specify.
If you didn’t generate your manifest prior to deployment, run the following command to generate it now:
$ istioctl manifest generate <your original installation options> > $HOME/generated-manifest.yaml
Then run the following verify-install command to see if the installation was successful:
$ istioctl verify-install -f $HOME/generated-manifest.yaml
Hope you find this useful.
Related
I'm trying to push a helm chart to Google Cloud OCI registry (Artifact Registry) but I get forbidden error:
helm push testapp-1.0.0.tgz oci://europe-north1-docker.pkg.dev/project-id/my-artifact-registry/
Error: failed to authorize: failed to fetch anonymous token:
unexpected status: 403 Forbidden
It seems that I'm authenticated OK, since when I try to push it without the "oci://" prefix it works fine:
helm chart push europe-north1-docker.pkg.dev/project-id/my-artifact-registry/charts/testapp:1.0.0
The push refers to repository [europe-north1-docker.pkg.dev/..]
ref: europe-north1-docker.pkg.dev/...
digest: 2757354aef8af2db48261d52c17c0df35a99d6fccaf016b0e67e167c391b69c7
size:3.9 KiB
name: testapp
version: 1.0.0
1.0.0: pushed to remote (1 layer, 3.9 KiB total)
I logged in to the helm registry using service account json key, using below command:
helm registry login -u _json_key_base64 --password <base_64_key> https://europe-north1-docker.pkg.dev
and this service-account has below roles:
roles/artifactregistry.admin
roles/artifactregistry.repoAdmin
roles/artifactregistry.writer
roles/container.developer
roles/storage.admin
roles/storage.objectViewer
Is there any specific permission needs to be enabled in GCP to use "OCI" protocol?
or any service need to be enabled?
or any different authentication required?
I followed the instructions here but with no success
It's funny, but this is not the first time it has happened to me... once I submit the question to Stack Overflow, something hits me and I'm able to find the problem myself!
Anyway, the problem is basically with the authentication: the URL to log in to should be in the format:
https://LOCATION-docker.pkg.dev/PROJECT/REPOSITORY
like this:
helm registry login -u _json_key_base64 --password <base_64_key> \
https://europe-north1-docker.pkg.dev/project-id/my-artifact-registry
I faced the same issue, but using Cloud Build.
I'd be glad if this snippet of code can help someone.
steps:
- name: 'alpine/helm:3.9.1'
  id: 'helm package'
  args: ['package', '.']
- name: 'alpine/helm:3.9.1'
  id: 'helm push'
  env:
  - 'HELM_REGISTRY_CONFIG=../builder/home/.docker/config.json'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    helm push --debug mylibchart-*.tgz oci://europe-west3-docker.pkg.dev/$PROJECT_ID/helm-registry
Basically, in the step where we want to push our *.tgz, we need to set the HELM_REGISTRY_CONFIG environment variable to the default path of the Docker config.json.
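Note that this assumes a Docker config.json with Artifact Registry credentials already exists under /builder/home/.docker. If it doesn't, a preceding step along these lines (builder image and region are just examples) could create it:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'configure docker auth'
  entrypoint: 'gcloud'
  args: ['auth', 'configure-docker', 'europe-west3-docker.pkg.dev', '--quiet']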
This is kinda stupid, but I was transitioning from Container Registry to Artifact Registry and forgot to give my service account permissions for Artifact Registry, which resulted in this exact error.
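If that is your situation too, granting the role back is a one-liner; a sketch with gcloud, where the project ID and service account are placeholders:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"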
Every document I found only tells you how to enable/disable a feature while installing a new Istio instance. But I think in a lot of cases, people need to update the Istio configuration.
For example, in Accessing External Services it says I need to provide <flags-you-used-to-install-Istio>, but what if I don't know how the instance was installed?
In Address auto allocation, it doesn't mention a way to update the configuration at all. Does that imply this feature has to be enabled in a fresh installation?
Why there's no istioctl update command?
The confusion totally makes sense; at the very least it would be nice for this to be called out somewhere.
Basically, there is no update command for the same reason there is no kubectl update command. What istioctl does is generate YAML output that describes, in a declarative way, how your application should be running, then apply it to the cluster and let Kubernetes handle it.
So basically istioctl install with the same values will produce the same output and when applied to Kubernetes, if there were no changes, nothing will be updated.
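If you want to preview what would change before applying, you can diff the freshly generated manifest against the live cluster; a sketch, assuming you still know your installation options:
istioctl manifest generate <your-installation-options> | kubectl diff -f -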
I will rephrase your questions to be more precise; I believe the context is the same:
How do I find Istio installation configuration
Prior to installation, you should have generated the manifest. This can be done with
istioctl manifest generate <flags-you-use-to-install-Istio> > $HOME/istio-manifest.yaml
With this manifest you can inspect what is being installed, and track changes to the manifest over time.
This will also capture any changes to the underlying charts (if installed with Helm). Just add the -f flag to the command:
istioctl manifest generate -f path/to/manifest.yaml > $HOME/istio-manifest.yaml
If there is no manifest available, you can check IstioOperator CustomResource, but Istio must be installed with operator, for it to be available.
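For example, something like this shows the applied operator configuration (the resource name may differ depending on how it was installed):
kubectl get istiooperator -n istio-system -o yaml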
If neither of the above is available, you are out of luck. This is not an optimal situation, but it is what we get.
How do I customize Istio installation
Using IstioOperator
You can pass new configuration, in YAML format, to istioctl install
echo '
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 1000m      # override from default 500m
            memory: 4096Mi  # ... default 2048Mi
        hpaSpec:
          maxReplicas: 10   # ... default 5
          minReplicas: 2    # ... default 1
' | istioctl install -f -
The above example adjusts the resources and horizontal pod autoscaling settings for Pilot.
Any other configuration (ServiceEntry, DestinationRule, etc.) is deployed like any other resource with kubectl apply.
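For example, a ServiceEntry is just another manifest; a minimal sketch (the host name is made up) that you would apply with kubectl apply -f:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS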
Why is there no istioctl update command
Because of #2: changes to Istio are applied using istioctl install.
If you want to upgrade Istio to a newer version, there are instructions available in the docs.
I registered an account just to reply. I had been searching for a long time for how to update Istio, such as the global mesh configuration, and after seeing your post and the answer below I finally have an answer.
Previously I kept two configurations, one for istiod and one for the ingress gateway. Whenever I ran istioctl install -f istiod.yaml, my ingress gateway got deleted, which bothered me.
It wasn't until I saw this post that I got it.
I merged the two files into one. The following is my file; with it, the configuration can be updated without deleting my ingress gateway:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: TEXT
    enableTracing: true
    defaultConfig:
      tracing:
        zipkin:
          address: jaeger-collector.istio-system:9411
        sampling: 100
  components:
    ingressGateways:
    - name: ingressgateway
      namespace: istio-ingress
      enabled: true
      label:
        # Set a unique label for the gateway. This is required to ensure Gateways
        # can select this workload
        istio: ingressgateway
  values:
    gateways:
      istio-ingressgateway:
        # Enable gateway injection
        injectionTemplate: gateway
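With everything merged into one file, re-applying the configuration is then a single command (the file name is just an example):
istioctl install -f istio-merged.yaml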
Thank you very much; this post solved my troubles.
I am getting the below error while trying to manually inject the Istio sidecar container into a pod.
Kubernetes version v1.21.0
Istio version: 1.8.0
Installation commands:
kubectl create namespace istio-system
helm install --namespace istio-system istio-base istio/charts/base
helm install --namespace istio-system istiod istio/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt
In kubectl get events, I can see the below error:
Error creating: admission webhook "sidecar-injector.istio.io" denied the request: template: inject:443: function "appendMultusNetwork" not defined
In the kube-apiserver logs, the below errors are observed:
W0505 02:05:30.750732 1 dispatcher.go:142] rejected by webhook "validation.istio.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validation.istio.io\" denied the request: configuration is invalid: gateway must have at least one server", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
Please let me know if you have any clue on how to resolve this error.
I went through the step-by-step installation with the official documentation and could not reproduce your problem.
Here are a few things worth checking:
Did you execute all the commands correctly?
Maybe you are running a different version of Istio? You can check by issuing the istioctl version command (see the checks below)
Maybe you changed something in the config files? If you did, what exactly?
Try the latest version of Istio (1.9)
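A couple of quick checks along those lines, purely as a sketch and assuming the default istio-system namespace:
# Compare client, control plane and data plane versions
istioctl version
# Check whether the injection template in the cluster references appendMultusNetwork
# (a mismatch between the template and the istiod version can cause this error)
kubectl -n istio-system get configmap istio-sidecar-injector -o yaml | grep appendMultusNetwork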
I am doing chaos testing on all the Istio core components (Pilot, Mixer, Citadel) and the default objects/resources. I am manually deleting the components and documenting the behavior, which will help when things actually break in production.
I have deleted the ingress-gateway Service. It also deleted the egress pods, which I didn't expect.
Since I am going to delete all the default objects one by one, is there a better or cleaner way to recreate the core objects? For example, how would I recreate the ingress and egress services?
In my opinion, the best way to re-create lost/deleted components of Istio is to do it with Helm (the package manager for Kubernetes).
helm upgrade <your-release-name> <repo-name>/<chart-name> --reuse-values --force
You can also keep track of the changes to your Istio installation (aka Istio release) and simply restore it to its last working version using the following commands:
helm history <release_name>
helm rollback --force [RELEASE] [REVISION]
Alternatively, you can always go back to the Istio installation directory and re-apply the piece of the manifest corresponding to the deleted object; for example, for Istio v1.1.1 the istio-ingressgateway Service object is declared inside 'istio-1.1.1/install/kubernetes/istio-demo.yaml'. Additionally, these manifest files can be generated by the helm template command directly from the source code repository.
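For example, for that Istio 1.1 layout, regenerating the manifest and re-applying only the missing piece might look roughly like this (Helm 2 syntax, paths relative to the extracted release archive; the extracted file name is hypothetical):
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio-generated.yaml
# copy the lost object (e.g. the istio-ingressgateway Service) out of istio-generated.yaml, then:
kubectl apply -f istio-ingressgateway-service.yaml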
Using CloudFoundry, is there a way to define a custom DNS search so host names are resolved?
We are using an Ubuntu stemcell and need to reach out to an external server. Using an FQDN this works, but we would prefer to use the host name only. Generally, this is configured in resolv.conf on a Unix/Linux box, but I wasn't sure how to define this in CloudFoundry.
One option here would be a Bosh add-on. A Bosh add-on will run on all VMs managed by your Bosh Director. Here are some example add-ons.
You'll want to use the os-conf-release for your add-on. It has a job called search_domain which lets you set the search domain on all of the Bosh deployed VMs.
I haven't tested it, but I believe a manifest like this should work.
releases:
- name: os-conf
  version: 12
addons:
- name: search-domain
  jobs:
  - name: search_domain
    release: os-conf
    properties:
      search_domain: my.domain.com
That would add my.domain.com to the list of search domains in resolv.conf. Hope that helps!
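If the add-on is applied as a BOSH runtime config, rolling it out could look roughly like this (file and deployment names are placeholders); the new search domain takes effect the next time each deployment is deployed:
bosh update-runtime-config search-domain.yml
bosh -d my-deployment deploy my-deployment-manifest.yml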