How can I stop Istio from sending spans to Zipkin?
If I'm not wrong, this is not a Mixer adapter, right? It is something done directly from Pilot.
How can I disable it?
According to the section titled Cleanup in the Istio docs:
kubectl delete -f install/kubernetes/addons/zipkin.yaml
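Deleting the add-on removes the Zipkin deployment itself. If you only want the mesh to stop producing spans, tracing can also be switched off mesh-wide; a rough sketch, assuming the mesh configuration lives in the istio configmap (the field location can vary between Istio versions, and the proxies may need a restart to pick the change up):
kubectl -n istio-system edit configmap istio
# in the mesh configuration, set:
#   enableTracing: false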
Refer to the ElastiCache for Redis documentation -> Getting Started -> Step 4: Connect to the cluster's node:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/GettingStarted.ConnectToCacheNode.html
Under the section Connecting to a cluster mode disabled unencrypted cluster, the docs ask you to run the following command:
$ src/redis-cli -h cluster-endpoint -c -p port number
Then, it gives an example of some redis commands:
set x Hi
-> Redirected to slot [16287] located at 172.31.28.122:6379
OK
set y Hello
OK
get y
"Hello"
set z Bye
-> Redirected to slot [8157] located at 172.31.9.201:6379
OK
get z
"Bye"
get x
-> Redirected to slot [16287] located at 172.31.28.122:6379
"Hi"
What I don't understand is: when we're talking about a "cluster mode disabled" ElastiCache cluster, it means that there's only one shard, as stated in the docs under Components and Features.
If so, how is it that the requests sent in the example above got redirected to other nodes? If there's only one shard, it means that all the data is written to the primary node. The primary node may be replicated to replica nodes, but that's another thing.
Is it a mistake in the docs or am I missing something?
A Redis cluster is a logical grouping of one or more ElastiCache for Redis shards. The example talks about interacting with a "cluster mode disabled" Redis; however, replication is turned on: as you can see in the screenshot, there are 1 primary node and 2 replicas.
Initially I thought the redirections were due to the replicas, but I tested on my own cluster mode disabled Redis with the same replication setup and I did not get the ASK and MOVED redirections. I also tested this against the read-only endpoint directly. (I connected with --verbose mode and -c.)
I was not able to generate the redirection events you see in the documentation.
Therefore I can say with strong certainty that the author of the document pasted in output from a cluster mode enabled Redis cluster, which is probably what is causing your confusion.
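For comparison, a session against a cluster mode disabled endpoint goes through the single primary and never emits redirections. Roughly like this (a sketch; the endpoint name is hypothetical):
$ src/redis-cli -h my-redis.abc123.0001.use1.cache.amazonaws.com -p 6379
my-redis:6379> set x Hi
OK
my-redis:6379> get x
"Hi"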
You are right. The title of this section in the documentation is incorrect; it describes how to connect to a cluster mode enabled unencrypted cluster.
You can leave feedback on documentation inaccuracies by clicking the feedback icon in the navigation pane at the upper right of the screen, or by clicking "Provide feedback" at the bottom left in the footer of the page.
Using Istio 1.2.10-gke.3 on GKE.
curl -v -HHost:user.domain.com --resolve user.domain.com:443:$gatewayIP https://user.domain.com/auth -k
returns a 503 after TLS verification:
< date: Tue, 19 May 2020 20:50:29 GMT
< server: istio-envoy
Now I want to track the request, identify the first point of failure by tracing the logs of the components involved, and resolve the issue.
The logs of the istio-ingressgateway pod show nothing. After getting a shell on the pod, I run top and see an envoy process running; however, I don't see any logs for Envoy in /var/log/.
What am I missing? Am I looking in the wrong place? Or do I need to read the code of the framework to be able to use it?
I need to find out which link in the request processing chain broke first, and why, so that it can be fixed.
Here are some useful links to the Istio documentation for debugging a 503 error:
Istio documentation for Envoy access logs
Istio documentation for connectivity troubleshooting
The Envoy debugging tool istioctl is also useful, for example:
$ istioctl proxy-status
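Regarding the empty istio-ingressgateway logs: Envoy in that pod writes its access log to stdout rather than /var/log/, and access logging is disabled by default. A minimal sketch for enabling it, assuming the mesh configuration is in the istio configmap (the exact field location can vary between Istio versions):
kubectl -n istio-system edit configmap istio
# in the mesh configuration, set:
#   accessLogFile: "/dev/stdout"

# then watch the gateway's access log while reproducing the 503
kubectl -n istio-system logs -l app=istio-ingressgateway -f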
Also, one rare case where a 503 can appear: this error can also occur if the Envoy sidecar proxy has issues or was not properly injected into the deployment's pod, or when there are mTLS misconfigurations.
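To rule out the mTLS case, one option in Istio 1.2-1.4 is istioctl's authn tls-check; a sketch, with the pod and service names as placeholders:
$ istioctl authn tls-check POD_NAME.NAMESPACE SERVICE_NAME.NAMESPACE.svc.cluster.local
This prints whether the client and server TLS settings for that service are in CONFLICT.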
Hope it helps.
I am using Istio version 1.3.5. Is there any configuration that needs to be set to allow istio-proxy to log the traceId? I am using Jaeger tracing (with the Zipkin protocol) enabled. There is one thing I want to accomplish by having traceId logging:
- log correlation across multiple services upstream, so that I can basically filter all logs by a certain traceId.
According to the Envoy proxy documentation for Envoy v1.12.0, which is used by Istio 1.3:
Trace context propagation
Envoy provides the capability for reporting tracing information regarding communications between services in the mesh. However, to be able to correlate the pieces of tracing information generated by the various proxies within a call flow, the services must propagate certain trace context between the inbound and outbound requests.
Whichever tracing provider is being used, the service should propagate the x-request-id to enable logging across the invoked services to be correlated.
The tracing providers also require additional context, to enable the parent/child relationships between the spans (logical units of work) to be understood. This can be achieved by using the LightStep (via OpenTracing API) or Zipkin tracer directly within the service itself, to extract the trace context from the inbound request and inject it into any subsequent outbound requests. This approach would also enable the service to create additional spans, describing work being done internally within the service, that may be useful when examining the end-to-end trace.
Alternatively the trace context can be manually propagated by the service:
- When using the LightStep tracer, Envoy relies on the service to propagate the x-ot-span-context HTTP header while sending HTTP requests to other services.
- When using the Zipkin tracer, Envoy relies on the service to propagate the B3 HTTP headers (x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, and x-b3-flags). The x-b3-sampled header can also be supplied by an external client to either enable or disable tracing for a particular request. In addition, the single b3 header propagation format is supported, which is a more compressed format.
- When using the Datadog tracer, Envoy relies on the service to propagate the Datadog-specific HTTP headers (x-datadog-trace-id, x-datadog-parent-id, x-datadog-sampling-priority).
TL;DR: the trace context (including the traceId) is not propagated automatically by your service; the B3 HTTP headers need to be manually forwarded from inbound to outbound requests.
Additional information: https://github.com/openzipkin/b3-propagation
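As a rough illustration of what the service has to do (a sketch; the downstream URL and the shell variables are hypothetical, their values would come from the headers of the inbound request):
# forward the B3 headers received on the inbound request to the outbound call
curl http://downstream-service:8080/api \
  -H "x-request-id: $REQ_ID" \
  -H "x-b3-traceid: $TRACE_ID" \
  -H "x-b3-spanid: $SPAN_ID" \
  -H "x-b3-parentspanid: $PARENT_SPAN_ID" \
  -H "x-b3-sampled: $SAMPLED"
If you also want istio-proxy's own access log to show the traceId, Envoy's access log format supports request headers via %REQ(X-B3-TRACEID)%, which can be set through the mesh config's accessLogFormat (verify against your Istio version).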
When using helm upgrade --install, I every so often run into timeouts. The error I get is:
UPGRADE FAILED
Error: timed out waiting for the condition
ROLLING BACK
If I look at the GKE cluster logs on GCP, I see that when this happens it's because this step takes an unusually long time to execute:
Killing container with id docker://{container-name}:Need to kill Pod
I've seen it range from a few seconds to 9 minutes. If I go into the log message's metadata to find the specific container and look at its logs, there is nothing in them suggesting a difference between it and a quickly killed container.
Any suggestions on how to keep troubleshooting this?
You could refer to this troubleshooting guide for general issues with Google Kubernetes Engine.
As mentioned there, you may need to use the 'Troubleshooting Applications' guide for further debugging of the application pods or their controller objects.
I am assuming that you have already checked the logs (1) of the container that resides in the respective pod, or described it (2) (look at the reason for termination) using the commands below. If not, you can try these as well to get more valuable information.
1. kubectl logs POD_NAME -c CONTAINER_NAME -p
2. kubectl describe pods POD_NAME
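In addition, since the failure is a timeout while Helm waits for resources, it may help to check the pod's events and to retry with a longer wait. A rough sketch (Helm 2 syntax, where --timeout is in seconds; release and chart names are placeholders):
# recent events often show why the old pod is slow to terminate
kubectl get events --sort-by=.lastTimestamp | grep POD_NAME

# retry the upgrade with an explicit, longer wait
helm upgrade --install RELEASE_NAME CHART_PATH --wait --timeout 600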
Note: I saw a similar discussion thread reported on github.com about a helm upgrade failure. You can have a look over there as well.
I have a setup using Kubernetes and Istio where we run a set of services. Each of our services has an Istio sidecar and a REST API. What we would like is that whenever a service within our setup calls another, the called service knows which service is the caller (preferably through a header).
Looking at the example image from bookinfo:
bookinfo-image (Link due to <10 reputation)
This would mean that, in the source code for the ratings service, I would like to be able to read a header telling me the request came from, e.g., reviews-v2.
My intuition tells me that I should be able to handle this in the Istio sidecars, but I fail to see exactly how.
Until now I have looked especially at EnvoyFilters in the hope that they could help me. I see that with an EnvoyFilter I would be able to set a header, but what I don't see is how I would get the information about which service made the call in order to set it in the header.
Envoy automatically sets the X-Forwarded-Client-Cert header, which contains the SPIFFE ID of the caller. The SPIFFE ID in Istio is a URI of the form spiffe://cluster.local/ns/<namespace>/sa/<service account>; practically, it designates the Kubernetes service account of the caller. You may want to test it by using the Istio httpbin sample and sending a request to httpbin:8000/headers.
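A quick way to see it, assuming the httpbin and sleep samples are deployed in the default namespace with sidecars injected and mutual TLS enabled (a sketch; names match the standard samples):
# call httpbin from the sleep sample and print the headers httpbin received
SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $SLEEP_POD -c sleep -- curl -s http://httpbin:8000/headers
# the X-Forwarded-Client-Cert value should contain something like
# URI=spiffe://cluster.local/ns/default/sa/sleep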
I ended up finding another solution by using a "rule". We made sure that policy enforcement was enabled and then added the following rule:
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: header-rule
  namespace: istio-system
spec:
  actions: []
  requestHeaderOperations:
  - name: serviceid
    values:
    - source.labels["app"]
    operation: REPLACE
With that in place, we achieved what we were attempting to do.