I am using Istio with tracing. I have two REST services, A1 and A2, where A1 calls A2.
The A1 service forwards the headers received from the client to REST service A2.
However, I am not able to see the trace in my Jaeger UI. I am sending some random values in the headers using curl to my service A1:
curl -H "end-user: april1101" -H "x-request-id: april1a01"
-H "x-b3-traceid: 1" -H "x-b3-spanid: 2" -H "x-b3-parentspanid: 3"
-H "x-b3-sampled: 4dfgdgsdfgdfg" -H "x-b3-flags: 5"
-H "x-ot-span-context: 6" http://trace-rest-2-service:9005/kafka/headerpass?arg=yo
Another set of header values I tried is:
curl -H "end-user: 34723b86-fe18-4e36-ac56-0ad12bf3d136"
-H "x-request-id: 4aba321a-4e60-4b42-b0ca-654c2400d485"
-H "x-b3-traceid: 308eec41-93bd-4584-8efe-2b968284c41e"
-H "x-b3-spanid: 8395e40e-6a79-4a34-b444-18d401f597cb"
-H "x-b3-parentspanid: c55f6cf0-3547-4202-b51e-9eaaa2136a73"
-H "x-b3-sampled: 1948dd81-9e70-4694-95f9-018e3abdbe29"
-H "x-b3-flags: acac5baf-42d2-45d8-84d0-645070a76f2e"
-H "x-ot-span-context: 6ee27e27-45c4-47de-86e4-1d4227bcf638"
http://trace-rest-2-service:9005/kafka/headerpass?arg=832amApril_1
However, I don't see any trace in the Jaeger UI. If I just call a service A3 without passing any headers, then I do see it traced in the Jaeger UI.
thanks
There are a few different reasons why you could be experiencing this issue.
From the Istio documentation:
WHY ARE MY REQUESTS NOT BEING TRACED?
Since Istio 1.0.3, the sampling rate for tracing has been reduced to 1% in the default configuration profile. This means that only 1 out of 100 trace instances captured by Istio will be reported to the tracing backend. The sampling rate in the demo profile is still set to 100%. See this section for more information on how to set the sampling rate.
If you still do not see any trace data, please confirm that your ports conform to the Istio port naming conventions and that the appropriate container port is exposed (via pod spec, for example) to enable traffic capture by the sidecar proxy (Envoy).
If you only see trace data associated with the egress proxy, but not the ingress proxy, it may still be related to the Istio port naming conventions. Starting with Istio 1.3 the protocol for outbound traffic is automatically detected.
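One more thing specific to your curls: the B3 header values themselves must be well formed. x-b3-traceid is expected to be a 16- or 32-character lowercase hex string, x-b3-spanid and x-b3-parentspanid 16 hex characters, and x-b3-sampled either 0 or 1, so values like 4dfgdgsdfgdfg or UUIDs containing dashes will most likely not be accepted as a valid trace context, and the request then won't show up under that trace. A sketch with well-formed values (URL taken from your question; the hex values are just placeholders):
# same request, but with B3 values in the format the Zipkin/B3 propagation expects
curl -H "x-request-id: april1a01" \
     -H "x-b3-traceid: 463ac35c9f6413ad48485a3953bb6124" \
     -H "x-b3-spanid: a2fb4a1d1a96d312" \
     -H "x-b3-parentspanid: 0020000000000001" \
     -H "x-b3-sampled: 1" \
     "http://trace-rest-2-service:9005/kafka/headerpass?arg=yo"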
Hope it helps.
I have a Kubernetes cluster configured which builds perfectly when running via Docker Desktop, including invoking with successful endorsement via all three Chaincode containers in the network.
On the remote side, I'm using AWS EKS to deploy my nodes and I have more recently followed this guide on deploying a production ready peer. I already had EFS set up and in use as a k8s Persistent Volume, and this is populated each time I spool up a network with all the config. This means all the crypto materials, connection profiles, etc are mounted to the relevant containers and as per best practice the reference to these TLS certs is in this directory.
This all works as expected... my admin pods can communicate with my peers, the orderers connect, etcetera. I'm able to fully install chaincode, approve it and commit it to all three of my peers successfully.
When it comes to invoking the chaincode, my org1 container always succeeds, and successfully communicates with the peer in its organization.
I'm aware of the core.yaml setting localMspId and this is being overridden by the environment variable CORE_PEER_LOCALMSPID for each set of peers, such that in my org1 peer the value is Org1MSP, in org2 it's Org2MSP, etc.
When running peer chaincode invoke, the first container (org1) succeeds very quickly, while the other two try to contact their peers and hang for the timeout period set in the default gRPC settings (110000 ms wait). I have also set the env var CORE_PEER_ADDRESS_AUTODETECT: "true" on my peer to ensure it doesn't try to resolve using hostnames like peer0.org1 (this clearly works for org1 but not for the other two).
The environment variables set for TLS in each of the containers correspond to the contents of the ones I am passing (in the correct order) with my invoke command:
peer chaincode invoke --ctor '${CC_INIT_ARGS}' --channelID ${CHANNEL_ID} --name ${CC_NAME} --cafile \$ORDERER_TLS_ROOTCERT_FILE \
--tls true -o orderer.${ORG}:7050 \
--peerAddresses peer0.org1:7051 \
--peerAddresses peer0.org2:7051 \
--peerAddresses peer0.org3:7051 \
--tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org1-cert.pem \
--tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org2-cert.pem \
--tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org3-cert.pem >&invoke-log.txt
cat invoke-log.txt
That command is executed inside my container, and as mentioned, I have manually confirmed this by inspecting all three containers and cat-ing the contents of the files referenced by their TLS environment variables versus the files at the above paths, and they match exactly. That is to say, the contents of /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org1-cert.pem are identical to the file referenced by the CORE_PEER_TLS_ROOTCERT_FILE setting in org1, and so on per organization.
Example org1 chaincode container logs:
2022-02-23T13:47:07.255Z debug [c-api:lib/handler.js] [allorgs-5e707801] Calling chaincode Invoke(), response status: 200
2022-02-23T13:47:07.256Z info [c-api:lib/handler.js] [allorgs-5e707801] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
For the org2 and org3 containers, once the timeout finally expires, they output:
2022-02-23T12:24:05.045Z error [c-api:lib/handler.js] Chat stream with peer - on error: %j "Error: 14 UNAVAILABLE: No connection established\n at Object.callErrorFromStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client.js:391:49)\n at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)\n at /usr/local/src/node_modules/@grpc/grpc-js/build/src/call-stream.js:182:78\n at processTicksAndRejections (internal/process/task_queues.js:79:11)"
2022-02-23T12:24:05.045Z debug [c-api:lib/handler.js] Chat stream ending
I have also enabled DEBUG logs on everything and I'm gleaning nothing useful from it. Any help or suggestions would be greatly appreciated!
The three peers share the same port. Is that even possible?
Also, when running invoke from the command line, I would normally use the following pattern, repeated for each peer.
--peerAddresses localhost:6051 --tlsRootCertFiles <path to peer on port 6051>
--peerAddresses localhost:6052 --tlsRootCertFiles <path to peer on port 6052>
not the three peers followed by the three TLS cert file paths.
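The repeated --peerAddresses and --tlsRootCertFiles flags are matched up by position, so grouping each address with its root cert makes the pairing explicit and easier to verify. A sketch of the invoke from the question rewritten in that style (hostnames, ports, and paths are taken from the question; adjust them to your environment):
# same invoke, with each peer address immediately followed by its TLS root cert
peer chaincode invoke -o orderer.${ORG}:7050 --tls true \
  --cafile "$ORDERER_TLS_ROOTCERT_FILE" \
  --channelID "${CHANNEL_ID}" --name "${CC_NAME}" --ctor "${CC_INIT_ARGS}" \
  --peerAddresses peer0.org1:7051 --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org1-cert.pem \
  --peerAddresses peer0.org2:7051 --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org2-cert.pem \
  --peerAddresses peer0.org3:7051 --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org3-cert.pem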
Using Istio 1.2.10-gke.3 on GKE.
curl -v -HHost:user.domain.com --resolve user.domain.com:443:$gatewayIP https://user.domain.com/auth -v -k
returns a 503 after TLS verification:
< date: Tue, 19 May 2020 20:50:29 GMT
< server: istio-envoy
Now I want to track the request and identify the first point of failure by tracing the logs of the components involved, and resolve the issue.
The logs of the istio-ingressgateway pod show nothing. After getting a shell on the pod, I run top and see an envoy process running; however, I don't see any logs for Envoy in /var/log/.
What am I missing? Am I looking in the wrong place? Or do I need to read the code of the framework to be able to use it?
I need to find out which link in the request processing chain broke first, and the reason, so that it can be fixed.
Here are some useful links to the Istio documentation for debugging error 503:
Istio documentation for envoy access logs
Istio documentation for Connectivity troubleshooting.
Useful envoy debugging tool istioctl.
$ istioctl proxy-status
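Note that Envoy in the istio-ingressgateway pod writes its logs to the container's stdout/stderr rather than to /var/log/, so kubectl logs is the place to look; request-level access log entries also only appear if access logging is enabled in the mesh config (for example via the global.proxy.accessLogFile setting, which is an assumption about your install method). A minimal sketch, assuming the standard istio-system namespace and labels:
# list the gateway pods and dump their recent logs
kubectl -n istio-system get pods -l app=istio-ingressgateway
kubectl -n istio-system logs -l app=istio-ingressgateway --tail=100
# and the sidecar of the backing workload ("my-app" and the namespace are placeholders)
kubectl -n default logs deploy/my-app -c istio-proxy --tail=100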
Also, one rarer case: error 503 can also appear if the Envoy sidecar proxy has issues or was not properly injected into the deployment's pod, or when there are mTLS misconfigurations.
Hope it helps.
I am using Istio version 1.3.5. Is there any configuration to be set to allow istio-proxy to log the traceId? I have Jaeger tracing (with the Zipkin protocol) enabled. There is one thing I want to accomplish by having traceId logging:
- log correlation across multiple services upstream, so that I can basically filter all logs by a certain traceId.
According to the Envoy proxy documentation for Envoy v1.12.0, used by Istio 1.3:
Trace context propagation
Envoy provides the capability for reporting tracing information regarding communications between services in the mesh. However, to be able to correlate the pieces of tracing information generated by the various proxies within a call flow, the services must propagate certain trace context between the inbound and outbound requests.
Whichever tracing provider is being used, the service should propagate the x-request-id to enable logging across the invoked services to be correlated.
The tracing providers also require additional context, to enable the parent/child relationships between the spans (logical units of work) to be understood. This can be achieved by using the LightStep (via OpenTracing API) or Zipkin tracer directly within the service itself, to extract the trace context from the inbound request and inject it into any subsequent outbound requests. This approach would also enable the service to create additional spans, describing work being done internally within the service, that may be useful when examining the end-to-end trace.
Alternatively the trace context can be manually propagated by the service:
- When using the LightStep tracer, Envoy relies on the service to propagate the x-ot-span-context HTTP header while sending HTTP requests to other services.
- When using the Zipkin tracer, Envoy relies on the service to propagate the B3 HTTP headers (x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, and x-b3-flags). The x-b3-sampled header can also be supplied by an external client to either enable or disable tracing for a particular request. In addition, the single b3 header propagation format is supported, which is a more compressed format.
- When using the Datadog tracer, Envoy relies on the service to propagate the Datadog-specific HTTP headers (x-datadog-trace-id, x-datadog-parent-id, x-datadog-sampling-priority).
TLDR: the B3 HTTP headers (which carry the traceId) need to be manually propagated by your services.
Additional information: https://github.com/openzipkin/b3-propagation
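If the goal is to also see the traceId in the istio-proxy access logs, note that the default access log format does not include it; in Istio 1.3 the sidecar access log format can be customized (for example via the global.proxy.accessLogFormat install value, which is an assumption about your install method) to include %REQ(X-B3-TRACEID)%. A minimal sketch for verifying the correlation once that is in place (service name, namespace, and header values are placeholders):
# send a request with an explicit, well-formed B3 trace context
TRACE_ID=463ac35c9f6413ad48485a3953bb6124
curl -H "x-b3-traceid: ${TRACE_ID}" \
     -H "x-b3-spanid: a2fb4a1d1a96d312" \
     -H "x-b3-sampled: 1" \
     "http://my-service.default.svc.cluster.local:8080/api"
# then grep the sidecar's access log for the same trace id
kubectl -n default logs deploy/my-service -c istio-proxy | grep "${TRACE_ID}"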
I have a .NET server running in Google Kubernetes Engine. It is configured to use gRPC through Google Cloud Endpoints. Now I need to schedule a task to call my gRPC method once per day.
The first thing I tried was to use Google Cloud Scheduler to call the HTTP method directly. For that I have:
Set up HTTP to gRPC transcoding on my server to call my gRPC method through HTTP.
Created and enabled an SSL certificate as described here.
Created a service account in the IAM & admin console with Service Account Token Creator and Service Account User permissions.
Created a Cloud Scheduler job with my URL and an Auth header set to an OIDC token for the service account created above.
Deployed a Google Cloud Endpoints configuration with the following parameters (among others):
authentication:
  providers:
  - id: google_service_account
    issuer: MY_SERVICE_ACCOUNT_EMAIL
    jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/MY_SERVICE_ACCOUNT_EMAIL
  rules:
  - selector: "*"
    requirements:
    - provider_id: google_service_account
After that, when I run the scheduler job, it returns the result "Failed". In the logs it writes an ERROR with status UNKNOWN.
The second thing I tried was to use Google Cloud Scheduler to publish a message to a Pub/Sub topic with my server as the subscriber.
That was unsuccessful too, because I can't verify ownership of the Google Cloud Endpoints domain. I asked a related question here: How to verify ownership of Google Cloud Endpoints service URL?
Now the question: what is the best way to schedule a task that calls a gRPC method, assuming the following environment:
.NET server running on GKE
gRPC
Automated periodic invocation of that task (I can call it manually, but that defeats the purpose)
So you were able to make an HTTP call manually, but not automatically via Google Cloud Scheduler, is that correct?
If so, check whether the request reaches the Cloud Endpoints proxy in the Cloud Console Endpoints logging; it may give you some hints.
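If you prefer checking from the command line instead of the console, a rough sketch, assuming gcloud is configured for your project (the job name is a placeholder, and the log filter is an assumption; adjust the resource type to whatever the Logs Viewer shows for your Endpoints service):
# trigger the job manually, then read recent Endpoints/API logs
gcloud scheduler jobs run my-grpc-job
gcloud logging read 'resource.type="api"' --limit=20 --freshness=1h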
Distributed scheduler
For more details, refer to the source code: Distributed scheduler
This application can be run on different hosts and offers functionality to schedule execution of an arbitrary command at a particular time or periodically.
There are two ways to communicate with the application: gRPC and REST. The remote interfaces are specified in the dsched.proto file. The corresponding REST API can also be found there in the form of API annotations. We also provide generated Swagger files.
To specify task execution timing, we use the notation adopted by cron.
Scheduled tasks are stored in a file and loaded automatically during startup.
Building
Install gRPC
Install gRPC gateway
To parse crontab statements and schedule task execution, we use the gopkg.in/robfig/cron.v2 library, so it should also be installed: go get -u gopkg.in/robfig/cron.v2. Documentation can be found here.
Get the dsched package: go get -u gitlab.com/andreynech/dsched
Now it is possible to run the standard go build command in the dscheduler and gateway directories to generate binaries for the scheduler and the REST/JSON API gateway. It might also be helpful to examine our CI configuration file to see how we set up the build environment.
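A condensed sketch of the build steps above, assuming a classic GOPATH layout (adjust the paths if you use Go modules):
# fetch the dependencies and the package, then build both binaries
go get -u gopkg.in/robfig/cron.v2
go get -u gitlab.com/andreynech/dsched
cd "$GOPATH/src/gitlab.com/andreynech/dsched/dscheduler" && go build
cd "../gateway" && go build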
Running
All the scheduling functionality is implemented by the dscheduler executable, so it can be run on system startup or on demand. As described by dscheduler --help, there are two command-line parameters:
-i string - File name to store task list (default "/var/run/dscheduler.db")
-p string - Endpoint to listen (default ":50051")
If there is a need to offer the REST/JSON API, the gateway application located in the gateway directory should be run. It can reside on the same host as dscheduler, but typically it would be another host which is accessible over HTTP from outside and can at the same time talk to dscheduler running in the internal network. This setup was also the reason to split the scheduler and gateway into two executables. gateway is a mostly generated application and supports several command-line parameters, described by running gateway --help. An important parameter is -sched_endpoint string, which is the endpoint of the Scheduler service (default "localhost:50051"). It specifies the host name and port where dscheduler is listening for requests.
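Putting the two together, a minimal sketch of starting both executables ("scheduler-host" is a placeholder for wherever dscheduler runs):
# on the internal host: start the scheduler with its task-list file and listen port
./dscheduler -i /var/run/dscheduler.db -p ":50051"
# on the externally reachable host: start the REST/JSON gateway pointing at the scheduler
./gateway -sched_endpoint "scheduler-host:50051"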
Scheduling tasks (testing)
There are three ways to control the scheduler server:
Using Go client implemented in cli/ directory
Using Python client implemented in py_cli directory
Using REST/JSON API gateway and curl
The Go and Python clients have a similar set of command-line parameters.
$ ./cli --help
Usage of cli:
-a string
The command to execute at time specified by -c parameter
-c string
Statement in crontab format describes when to execute the command
-e string
Host:port to connect (default "localhost:50051")
-l List scheduled tasks
-p Purge all scheduled tasks
-r int
Remove the task with specified id from schedule
-s Schedule task. -c and -a arguments are required in this case
They use the gRPC protocol to talk to the scheduler server. Here are several example invocations:
$ ./cli -l                                # list currently scheduled tasks
$ ./cli -s -c "@every 0h00m10s" -a "df"   # schedule the df command for execution every 10 seconds
$ ./cli -s -c "0 30 * * * *" -a "ls -l"   # schedule ls -l to run every 30 minutes
$ ./cli -r 3                              # remove the task with ID 3
$ ./cli -p                                # remove all scheduled tasks
It is also possible to use curl to invoke dscheduler functionality over the REST/JSON API gateway. Assuming that the dscheduler and gateway applications are running, here are some invocations to list, add, and remove scheduling entries from the same host (localhost):
curl 'http://localhost:8080/v1/scheduler/list'                                                                # list currently scheduled tasks
curl -d '{"id":0, "cron":"@every 0h00m10s", "action":"ls"}' -X POST 'http://localhost:8080/v1/scheduler/add'  # schedule ls for execution every 10 seconds
curl -d '{"id":0, "cron":"0 30 * * * *", "action":"ls -l"}' -X POST 'http://localhost:8080/v1/scheduler/add'  # schedule ls -l to run every 30 minutes
curl -d '{"id":2}' -X POST 'http://localhost:8080/v1/scheduler/remove'                                        # remove the task with ID 2
curl -X POST 'http://localhost:8080/v1/scheduler/removeall'                                                   # remove all scheduled tasks
All changes are automatically saved to the file.
Thoughts on scheduler service discovery
In large deployment scenarios (like hundreds of hosts) it might be a challenging problem to find out all the IP addresses and ports where the scheduler service is started. It would be pretty easy to add support for Zeroconf (Bonjour/Avahi) technology to simplify service discovery. As an alternative, it might be possible to implement something similar to the CORBA Naming Service, where running services register themselves and the location of the naming service is well known. We decided to collect feedback before settling on a particular service discovery implementation, so your input is very welcome!
So I am considering Hyperledger Fabric to use with an application I have written in C++. From my understanding, the interactions, i.e. posting and retrieving data, are all done in chaincode, and in all of the examples I have seen this is invoked by using the CLI interface Docker container.
I simply want to be able to store data produced by my application on a blockchain.
My question is: how do I invoke the chaincode externally? Surely this is something that can be done. I saw that there was a REST SDK, but it is no longer supported, so I don't want to go near it, to be honest. What other options are available?
Thanks!
There are two official SDKs you can try out.
Fabric Java SDK
Node JS SDK
As correctly mentioned by @Ajaya Mandal, you can use the SDKs to automate the invoking process. For example, you can start the Node app as written in app.js of the balance-transfer example and hit the API as shown in the ./testAPI.sh file.
echo "POST invoke chaincode on peers of Org1 and Org2"
echo
VALUES=$(curl -s -X POST \
http://localhost:4000/channels/mychannel/chaincodes/mycc \
-H "authorization: Bearer $ORG1_TOKEN" \
-H "content-type: application/json" \
-d "{
\"peers\": [\"peer0.org1.example.com\",\"peer0.org2.example.com\"],
\"fcn\":\"move\",
\"args\":[\"a\",\"b\",\"10\"]
}")
Here you can add your arguments and pass them as you wish. You can use this thread to see how to send an HTTP request from C++.
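For completeness, the $ORG1_TOKEN used above is obtained by enrolling a user first; a sketch based on the same balance-transfer testAPI.sh (the username and org name are just example values, and jq is assumed to be installed):
# enroll a user with the balance-transfer app and capture the JWT it returns
ORG1_TOKEN=$(curl -s -X POST http://localhost:4000/users \
  -H "content-type: application/x-www-form-urlencoded" \
  -d 'username=Jim&orgName=Org1' | jq -r '.token')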