Istio 0.8. Trace istio objects and rules applied in a request - istio

Is there any way in v0.8 to trace the Istio objects that apply to a request, and why the request follows a specific path? For example, it goes through the ingress -> gateway -> virtual service -> destination rule -> pod, and so on.
I would also like to see the rules applied by each object.
Thank you.

There is a new project called Kiali that is built to provide observability for service meshes. Kiali is based on Istio (the team works closely with the Istio community) and always tracks its latest version.
Kiali has a service graph where users can see which Istio objects are applied to each service. Also, for each service, you can see the details of those Istio objects.
I'll put two screenshots here as a sneak peek:
Here is the service list, where you can see that the v3 and v2 nodes have a circuit breaker (lightning symbol) and the reviews service has a VirtualService routing requests between v1 and v2. Note that the graph also shows the health of all services and the connections between them.
From the service graph, you can navigate to each service's detail page, where you can find, among other things, the definitions of its attached Istio objects.
For more information about Kiali, check its website, kiali.io, and its GitHub repository, kiali/kiali.
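The kind of Istio objects Kiali surfaces here can be sketched as a VirtualService plus a DestinationRule in the v1alpha3 networking API that Istio 0.8 ships (a Bookinfo-style illustration; the names, weights, and thresholds are placeholders, not Kiali output):

```yaml
# Routes the reviews service's traffic between two subsets,
# which Kiali would render as a routing edge on the graph.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 50
---
# Defines the subsets and a circuit breaker, which Kiali would
# render as the lightning symbol on the affected nodes.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
```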


GCP API Gateway - Tracing not propagated

Tracing is currently not supported for API Gateway, based on this documentation.
However, when the trace is "started" by some component in front of it, e.g. the load balancer, those traces/headers are not visible in the log either.
API Gateway is based on ESPv2. I found the following startup options available.
Is there a way to pass those startup options to API Gateway, for example in Terraform?
CORS and other configuration would be handled the same way.
The only mechanism I could think of was an OpenAPI extension, but I don't see such an option there.
I consider propagating the trace context to Stackdriver logs the bare minimum for supportability.
Thanks for the help

make apigee x setup more secure

We plan to move from Apigee Edge to Apigee X. Unfortunately, I still have an open security-related question for which I cannot find any suitable information.
Do I need an additional service for intrusion detection, like Cloud IDS, or is that already built into Apigee X?
There are a number of possible approaches to best practices, but in general the service would be the same; Apigee X differs from Apigee Edge in multiple ways:
You can deploy 50 proxies to an environment.
API proxies are immutable when they are deployed.
You can do your own configuration/setup in Apigee.
All KVMs are encrypted using a customer-managed encryption key.
Admin authentication and Admin API endpoints are different.
GCP IAM and RBAC govern admin identity and operators.
You can have a look at this document.

How to bypass jwt policy within mesh using istio

I am setting up end-user authentication using Istio. I have service A and service B in my mesh, and service B has a JWT policy applied so that requests from outside the cluster need an authorization token for access.
However, I found that when service A accesses service B, it also gets a 401, meaning a token is required. How can I bypass the authentication within the mesh and apply it only to traffic from outside the mesh?
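The per-service JWT policy described above, in the authentication.istio.io/v1alpha1 API used by Istio releases of that era, would look roughly like this (a sketch; the service name, issuer, and JWKS URI are placeholders):

```yaml
# Hypothetical JWT origin-authentication policy attached to service B.
# Any request reaching service-b without a valid token gets a 401,
# including requests from service A inside the mesh.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: service-b-jwt
spec:
  targets:
  - name: service-b          # the K8s service the policy binds to
  origins:
  - jwt:
      issuer: "https://example-issuer.com"
      jwksUri: "https://example-issuer.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
```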
I don't think it is feasible to expose the same Kubernetes target service with different authentication approaches, where external mesh visitors authenticate against the service with a JWT token but internal mesh clients are exempt from that policy rule.
Looking deeper into the design of origin authentication (end-user authentication), you will find that an Istio Policy is strongly tied to its target workloads, which represent a particular Kubernetes service object. Therefore, you can't change the JWT policy's behavior depending on how the initial request reaches the target Kubernetes service.
What you can do is apply the JWT policy at the istio-ingressgateway level, so that all inbound requests from outside the mesh are authenticated at that stage, while mesh service-to-service communication, discovered through Kubernetes, is secured by an mTLS transport channel. This solution, however, requires reconsidering the current microservices design, since it affects the authentication and authorization methods of the entire mesh.
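The suggestion above, moving the JWT check to the ingress gateway, can be sketched with the same v1alpha1 API (issuer and JWKS URI are placeholders; newer Istio releases express this pattern with RequestAuthentication plus AuthorizationPolicy instead):

```yaml
# Hypothetical policy targeting the ingress gateway itself: only traffic
# entering the mesh from outside is forced to present a JWT, while
# in-mesh service-to-service calls bypass it (and can rely on mTLS).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  targets:
  - name: istio-ingressgateway
  origins:
  - jwt:
      issuer: "https://example-issuer.com"
      jwksUri: "https://example-issuer.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN
```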

WSO2 API Manager v1.8.0 - Clustering

I have a question on WSO2 API Manager clustering. I have gone through the deployment documentation in detail and understand the distributed deployment concept, in which one can segregate the publisher, store, key manager, and gateway. But as per my assessment, that makes the deployment architecture quite complex to maintain, so I would like a simpler deployment.
What I have tested is simply running two different instances of WSO2 API Manager on two different boxes, pointing to the same underlying data sources in MySQL. What I have seen is that the API calls work perfectly, and tokens obtained from one WSO2 instance work for API invocation on the other API Manager instance. The only issue with this model is that we need to deploy the APIs from the individual publisher components of however many WSO2 API Manager instances are running. I am fine with doing that, since the publishing will be done by one small team. We will have a hardware load balancer in front, holding the API endpoint URLs and token endpoint URLs for both API Managers, and the hardware LB will do the load balancing.
So my question is: are there any problems in following this simple approach from the RUNTIME perspective? Does clustering add any benefit from the RUNTIME perspective for WSO2 API Manager?
Thank you.
Your approach has the following drawbacks (there may be more that I am not aware of):
It is not scalable. Meaning: you can't independently scale (add more instances of) the store, publisher, gateway, or key manager.
Distributed throttling won't work. It will lead to throttling inconsistencies, since throttle state is not replicated unless you enable clustering. Let's say you define a 'Gold' tier for an API: no matter how many gateway instances you are using, a user should be restricted to no more than 20 req/min for this API. This is implemented with a distributed counter (I am not sure of the exact implementation details). If you don't enable clustering, one gateway node doesn't know how many requests the other gateway nodes have served, so each gateway node keeps its own throttle counter. Meaning: a user might be able to access your API at more than 20 req/min. That is one throttling inconsistency. Further, let's say one gateway node has throttled out a user but the other has not. Now, if your LB routes the request to the first gateway node, the user cannot access the API; if it routes to the second, the user can. That is another throttling inconsistency. To overcome these issues, you just need to replicate the throttle state across all gateway nodes by enabling clustering.
Distributed caching won't work. For example, API key validation information is cached. If you revoke a token on one API Manager node, the cache is cleared on that node only, so a user can't use the revoked token via that node BUT can still use it via the other API Manager node until its cache entry expires (15 minutes by default, I believe). This is just one instance where things can go wrong if you don't cluster your API Manager instances. To solve these issues, you just need to enable clustering, so the caches stay in sync across the cluster. Read this doc for more details on the various caches available in WSO2 API Manager.
You will run into several issues if you don't have the above features. WSO2 highly recommends a distributed deployment in production.

WSO2 API Manager cluster Key Manager

I am setting up the API Manager in a cluster and have one instance of the store and one instance of the publisher, which are clustered so they update each other on change. I also have the gateway set up in a master/worker cluster. All of this I found out how to do on the WSO2 site. The issue is that I want to cluster the Key Manager as well for higher load, but I can't find any documentation on how to cluster the Key Manager specifically. I assume it's not just a case of running more than one behind a load balancer, as they need to know when tokens etc. have changed?
Any help would be appreciated
Please follow this documentation on API Manager clustering, in particular the 'Configuring the connections among the components' -> 'Key Manager' section and the 'Configuring component features' section. This blog post explains the case where IS is used as the Key Manager, but the explanation may still help you understand the use of the several URLs.