We've designed our API to use Istio JWT authentication, which is mandatory, and we also use CORS. The problem is that our JS code makes AJAX calls, and the HTTP OPTIONS pre-flight request is sent without the JWT Authorization header. Unfortunately, the pre-flight request is then blocked by Istio. How can we solve this?
Not sure if I understood your question correctly, but I think Service Entry will solve this.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).
A ServiceEntry for your example might look like the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.foobar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
  resolution: DNS
I successfully deployed a simple Voila dashboard using Google Cloud Run for Anthos. However, since I created the deployment using a GitLab CI pipeline, by default the service was assigned a long and obscure domain name (e.g. http://sudoku.dashboards-19751688-sudoku.k8s.proteinsolver.org/).
I followed the instructions in mapping custom domains to map a shorter custom domain to the service described above (e.g. http://sudoku.k8s.proteinsolver.org). However, while the static assets load fine from this new custom domain, the interactive dashboard does not load, and the JavaScript console is populated with errors:
default.js:64 WebSocket connection to 'wss://sudoku.k8s.proteinsolver.org/api/kernels/5bcab8b9-11d5-4de0-8a64-399e35258aa1/channels?session_id=7a0eed38-77bb-40e8-ad77-d05632b5fa1b' failed: Error during WebSocket handshake: Unexpected response code: 503
_createSocket # scheduler.production.min.js:10
[...]
Is there a way to get web sockets to work with custom domains? Am I doing something wrong?
TL;DR: the following YAML needs to be applied to make websockets work:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: allowconnect-cluster-local-gateway
  namespace: gke-system
spec:
  workloadSelector:
    labels:
      app: cluster-local-gateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 80
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          http2_protocol_options:
            allow_connect: true
Here is the explanation.
For the custom domain feature, the request path is
client ---> istio-ingress envoy pods ---> cluster-local-gateway envoy pods ---> user's application.
Specifically, websocket requests need the cluster-local-gateway Envoy pods to support the extended CONNECT feature.
The EnvoyFilter YAML enables the extended CONNECT feature by setting allow_connect to true within the cluster-local-gateway pods.
I tried it myself, and it works for me.
I don't know anything about your GitLab CI pipeline. By default, Knative (Cloud Run for Anthos) assigns external domain names like {name}.{namespace}.example.com where example.com can be customized based on your domain.
You can find this domain in the Cloud Console or via kubectl get ksvc.
First, check whether this default domain works correctly with websockets. If it does, then it is indeed a "custom domain" issue. (If you are not sure, please edit your title/question so it doesn't mention "custom domains".)
Also, you need to explicitly mark your container port as h2c on Knative for websockets to work. See the ports section below, specifically name: h2c:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: h2c
          containerPort: 8080
I also see that the response code to your requests is HTTP 503, likely indicating a server error. Please check your application’s logs.
The Istio documentation gives an example of configuring egress using a wildcard ServiceEntry here.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  ports:
  - number: 443
    name: tls
    protocol: TLS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*.wikipedia.org"
    route:
    - destination:
        host: "*.wikipedia.org"
        port:
          number: 443
What benefit/difference does the VirtualService give? If I remove the VirtualService, nothing seems to be affected. I am using Istio 1.6.0.
The VirtualService is not really doing anything here, but take a look at the Istio docs:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio.
Virtual services play a key role in making Istio’s traffic management flexible and powerful. They do this by strongly decoupling where clients send their requests from the destination workloads that actually implement them. Virtual services also provide a rich way of specifying different traffic routing rules for sending traffic to those workloads.
A ServiceEntry adds those wikipedia sites as entries to Istio's internal service registry, so that auto-discovered services in the mesh can route to these manually specified services.
Usually that is used to enable monitoring and other Istio features for external services from the start, while the VirtualService allows the proper routing of requests.
Take a look at the Istio documentation.
Service Entry makes sure your mesh knows about the service and can monitor it.
Using Istio ServiceEntry configurations, you can access any publicly accessible service from within your Istio cluster.
A VirtualService manages traffic to external services and controls the traffic that goes to the service, which in this case is all of it.
I would say the benefit is that you can use Istio routing rules, which can also be set for external services that are accessed using ServiceEntry configurations. In that example, a timeout rule is set on calls to the httpbin.org service.
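As an illustration, a minimal sketch of such a rule, taken from the Istio egress-control example rather than your wikipedia setup (the name httpbin-ext and the 3s timeout come from that example and would need adjusting for your hosts):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s        # applied to calls leaving the mesh for httpbin.org
    route:
    - destination:
        host: httpbin.org
      weight: 100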
I have already set up a Google Cloud Endpoints project and can make HTTP/HTTPS requests to it. Endpoints gives me a MY_API.endpoints.MY_PROJECT.cloud.goog domain name that I can use. I'm using gRPC Cloud Endpoints with the HTTP/JSON to gRPC transcoding feature.
It is deployed on Google Kubernetes Engine (deployment YAML attached at the end).
When I try to create a push subscription with that URL, I get the following error:
"The supplied HTTP URL is not registered in the subscription's parent
project (url="https://MY_API.endpoints.MY_PROJECT.cloud.goog/v1/path", project_id="PROJECT_ID").
My gcloud call:
gcloud pubsub subscriptions create SUB_NAME --topic=projects/MY_PROJECT/topics/MY_TOPIC --push-endpoint="https://MY_API.endpoints.MY_PROJECT.cloud.goog/v1/path"
I tried to create a Cloud DNS public zone with that DNS name and set the corresponding records, but I still can't verify ownership in Google Search Console.
The question is: how can I set a DNS TXT record for the MY_API.endpoints.MY_PROJECT.cloud.goog domain to verify ownership? Or how can I use a Pub/Sub push subscription with Cloud Endpoints gRPC in some other way?
I could verify ownership of the domain if I had the ability to change the meta tags or headers of the gRPC responses converted to HTTP, but I doubt there is a way.
The Kubernetes manifest I used for deployment (in case it is helpful):
apiVersion: v1
kind: Service
metadata:
  name: GKE_SERVICE_NAME
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http2
  selector:
    app: GKE_SERVICE_NAME
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: GKE_SERVICE_NAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: GKE_SERVICE_NAME
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=MY_API.endpoints.MY_PROJECT.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:50051"
        ]
        ports:
        - containerPort: 9000
      - name: MY_CONTAINER_NAME
        image: gcr.io/MY_PROJECT/IMAGE_NAME:v1
        ports:
        - containerPort: 50051
Ultimately, your goal is to get Cloud Pub/Sub pushing to your container on GKE. There are a couple of ways to do this:
Domain ownership validation, as you've discovered:
You can try to do it with DNS, and there's a guide for configuring DNS for a cloud.goog domain.
You can try to do it with one of the non-DNS alternatives, which include methods such as hosting certain kinds of HTML or JavaScript snippets on the domain. This can be tricky, though, as I don't know how to make Cloud Endpoints serve static HTML or JavaScript content; it serves responses in OpenAPI format, which is essentially JSON.
Have you tried putting the Cloud Pub/Sub subscription and the cloud.goog domain in the same project? It might already be considered a verified domain in that case.
Since you are already using Google Kubernetes Engine, use either Cloud Run or Cloud Run on GKE. There is a difference between Cloud Run and Cloud Run on GKE, but both will run your Kubernetes containers. Push endpoints on Cloud Run don't require domain ownership validation (I'm not sure if this also covers Cloud Run on GKE). You may get other interesting benefits as well, as Cloud Run is essentially designed to address the very use case of serving a push endpoint from a container; for example, it will do autoscaling and monitoring for you.
We have a microservice architecture based on Kubernetes in Amazon EKS with Ambassador as API Gateway.
We have 2 Ambassadors: 1 public and 1 private. So we have services that are only accessible by services in the cluster or VPN, and we have some services that are public.
We need to make some URL paths in the public services private. For example, we have a public API that is accessible at api.company.com, and we want to leave most paths public, like api.company.com/createuser, api.company.com/login, etc., but we want to make other paths private, for example api.company.com/swagger.html.
We know that we could enable authentication for those paths in the API, but we are looking for a solution without auth.
An example of how we configure a K8s Service with Ambassador for public services:
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: backends_mapping
      prefix: /
      ambassador_id: ambassador-public
      service: backends.svc:8080
      host: api.mycompany.com
  labels:
    app: backends
  name: backends
  namespace: svc
spec:
  ports:
  - name: http-backends
    port: 8080
    protocol: TCP
    targetPort: http-api
  selector:
    app: backends
  type: ClusterIP
Not sure what you mean by "without auth". You will need some sort of check to serve internal content.
One approach to achieve this could be (note this is a high-level overview):
You can make the service private and not expose it directly.
Prefix all your internal routes with, say, an /internal/ or /private/ prefix.
So api.company.com/swagger.html becomes api.company.com/internal/swagger.html.
You can create a load balancer that points to this middleware.
The middleware (public service) will intercept all the requests; I think Nginx can be used here (a sketch follows this list). If the request has an /internal/ path, check whether it satisfies the condition (origin, internal network, etc.).
If the check passes, redirect to the private service.
If the check fails, return 403 Forbidden or whatever response code fits.
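A minimal sketch of that middleware, assuming Nginx runs in the cluster as the public-facing proxy; the ConfigMap name, the 10.0.0.0/8 CIDR, and the backends.svc:8080 upstream are hypothetical, borrowed from the question's example service:
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-path-filter        # hypothetical name
data:
  default.conf: |
    server {
      listen 8080;
      # Private paths: only allow requests coming from the internal network (hypothetical CIDR).
      location /internal/ {
        allow 10.0.0.0/8;
        deny all;
        proxy_pass http://backends.svc:8080;
      }
      # Everything else stays public.
      location / {
        proxy_pass http://backends.svc:8080;
      }
    }
This ConfigMap would be mounted into an Nginx deployment sitting behind the public load balancer, so only the proxy decides whether /internal/ paths are reachable.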
Cilium can do just what you want: http://docs.cilium.io/en/stable/policy/language/#http
Basically, you can specify L7 network policies which will only allow access to some of your API paths from certain pods.
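A minimal sketch of such a policy, assuming hypothetical pod labels (app: backends for the API pods, app: ambassador-public for the public gateway) and the public paths from the question; adjust the selectors and port to match your setup:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-public-paths-only     # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: backends                 # the API pods (assumed label)
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: ambassador-public      # the public Ambassador pods (assumed label)
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - path: "/createuser"
        - path: "/login"
Requests for anything else on that port (for example /swagger.html) would be rejected at L7.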
Cilium project page: https://cilium.io/
Layer 7 policies example: http://docs.cilium.io/en/stable/policy/language/#http
EKS install guide: http://docs.cilium.io/en/v1.4/gettingstarted/k8s-install-eks/?highlight=eks
Disclaimer: I am part of the team that develops Cilium.
I'm on a journey of testing Istio, and at the moment I'm about to test its "canary" traffic-routing capabilities.
For my test, I created a small service mesh composed of 5 microservices (serviceA, serviceB, serviceC, serviceD, serviceE). Each one is able to call the others. I just pass a path like A,E,C,B,B,D and the request follows this path.
In order to call my service mesh from outside the cluster, I have an Nginx Ingress Controller with an Ingress rule that points to the serviceA pod.
This is working fine.
The problem I'm facing concerns traffic switching using a custom header match like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ServiceA
  namespace: demo
  labels:
    app: demo
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: v1
  - match:
    - headers:
        x-internal-request:
          exact: "true"
    route:
    - destination:
        host: service-a
        subset: v2
So here, I want to route traffic to the v2 version of ServiceA when the custom header x-internal-request is set to true.
Questions:
In order to use this feature, do my services have to be aware of the x-internal-request header and pass it along to the next service in the request chain? Or do they not need to deal with it because Istio does the job for them?
In order to use this feature, do I need to use the Istio Ingress Controller (with an Istio Gateway) instead of the Nginx Ingress Controller?
Today, I am using the Nginx Ingress Controller to expose some of my services. We chose Nginx because it has some features like "external authorization" that save us a lot of work, and if we need to use the Istio ingress controller instead, I'm not sure it offers the same features as Nginx.
Perhaps there is a middle path I do not see.
Thank you for your help.
Istio is designed to use Envoy, deployed on each Pod as a sidecar, to intercept and proxy network traffic between microservices in the service mesh.
You can manipulate HTTP headers for requests and responses via Envoy as well. According to the official documentation, custom headers can be added to the request/response in the following order: weighted cluster level headers, route level headers, virtual host level headers, and finally global level headers. Because your Envoy proxies are deployed as sidecars on each relevant service Pod, the custom HTTP header should be passed along with each request or response.
I would recommend using the Istio ingress controller with its core component, the Istio Gateway, which is commonly used for enabling monitoring and routing-rule features in Istio mesh services.
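A minimal sketch of such a Gateway, assuming Istio's default istio: ingressgateway selector and a hypothetical name; the VirtualService above would then reference it via a gateways: list:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway              # hypothetical name
  namespace: demo
spec:
  selector:
    istio: ingressgateway         # Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
The VirtualService for service-a would then add gateways: ["demo-gateway"] (and optionally mesh) so the same header-based routing applies to traffic entering through the gateway.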
There was an issue opened on GitHub about the implementation of Nginx Ingress controller in mesh services and the problem with routing requests.