Access external jwksUri behind a company proxy - Istio

I am new to Istio and have a question about configuring a RequestAuthentication policy. The policy uses a jwksUri pointing at an external URI, and it is applied in the istio-system namespace. The moment I apply this policy and run
>istioctl proxy-status
the LDS of the ingress gateway the policy is applied to is marked STALE. If I remove the policy, the gateway goes back to the SYNCED state. It seems the jwksUri is not reachable because we are behind a company proxy. I created a ServiceEntry to access the external JWKS URI, something like this:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: jwksexternal
spec:
  hosts:
  - authorization.company.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
EOF
I also tried creating one more ServiceEntry for "Configuring traffic to external proxy", referring to this documentation: https://istio.io/latest/docs/tasks/traffic-management/egress/http-proxy/
But this is not working. How should I configure the company proxy in Istio?
Edit: these are the logs in istiod (please note that https://authorization.company.com/jwk is an external URL):
2021-06-02T14:35:39.423938Z error model Failed to fetch public key from "https://authorization.company.com/jwk": Get "https://authorization.company.com/jwk": dial tcp: lookup authorization.company.com on 10.X.0.X:53: no such host
2021-06-02T14:35:39.423987Z error Failed to fetch jwt public key from "https://authorization.company.com/jwk": Get "https://authorization.company.com/jwk": dial tcp: lookup authorization.company.com on 10.X.0.X:53: no such host
2021-06-02T14:35:39.424917Z info ads LDS: PUSH for node:istio-ingressgateway-5b69b5448c-8wbt4.istio-system resources:1 size:4.5kB
2021-06-02T14:35:39.433976Z warn ads ADS:LDS: ACK ERROR router~10.X.48.X~istio-ingressgateway-5b69b5448c-8wbt4.istio-system~istio-system.svc.cluster.local-105 Internal:Error adding/updating listener(s) 0.0.0.0_8443: Provider 'origins-0' in jwt_authn config has invalid local jwks: Jwks RSA [n] or [e] field is missing or has a parse error
I was not able to find a workaround for this issue. For now I have embedded the JWKS directly into the jwt rules, but this has a problem: whenever the public keys get rotated, the jwt rules fail. This is a proxy issue, but I am not sure how to get around it.
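For reference, the inline workaround looks roughly like this (issuer, selector, and key material are placeholders, not our real values):
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "https://authorization.company.com"   # placeholder issuer
    # JWKS embedded inline instead of jwksUri; breaks when keys rotate
    jwks: |
      { "keys": [ { "kty": "RSA", "kid": "placeholder", "n": "...", "e": "AQAB" } ] }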

By default, Istio allows traffic to external systems.
See https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy
So if the problem is that the JWKS URL can't be accessed, it is most likely not because of Istio, and a ServiceEntry won't help: a ServiceEntry configures the data plane (sidecars and gateways), but the JWKS fetch in your logs is performed by istiod itself, whose outbound requests do not pass through the mesh. The DNS error ("no such host" from the cluster resolver) points at the control-plane pod's network/DNS path, not at Istio's traffic rules.
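Since istiod is a Go program, its outbound HTTP client honors the standard proxy environment variables, so one commonly suggested workaround for corporate proxies (an assumption to verify in your environment, not something confirmed above) is to set HTTPS_PROXY/NO_PROXY on istiod. A minimal IstioOperator sketch, with a hypothetical proxy address:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
        # hypothetical corporate proxy; adjust host and port to your environment
        - name: HTTPS_PROXY
          value: http://proxy.company.com:3128
        # keep in-cluster traffic off the proxy
        - name: NO_PROXY
          value: svc,svc.cluster.local,10.0.0.0/8,127.0.0.1,localhost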

Related

GKE Gateway with a wildcard Certificate Manager certificate

I'm trying to set up GKE Gateway with an HTTPS listener using a wildcard certificate managed via Certificate Manager.
The problem I'm facing is not the provisioning of the certificate, which was done successfully following the DNS Authorization tutorial and this answer. I've successfully provisioned a wildcard certificate, which gcloud certificate-manager certificates describe <cert-name> shows as ACTIVE and AUTHORIZED on both the domain and its wildcard subdomain. I've also provisioned the associated Certificate Map and Map Entry (all via Terraform) and created a global static IP address and a wildcard A record for it in Cloud DNS.
However, when I try to use this certificate and address in the GKE Gateway resource, the resource gets "stuck" (it never reaches the SYNC phase), and no HTTPS GCLB is provisioned, as seen via gcloud or the Cloud Console.
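(For reference, this is roughly how I inspect the certificate map wiring; the map name gke-gateway matches the annotation in the config below:)
# both the map and its entries should show as ACTIVE
gcloud certificate-manager maps describe gke-gateway
gcloud certificate-manager maps entries list --map=gke-gateway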
Here's the config I was trying to use for it:
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-https
  annotations:
    networking.gke.io/certmap: gke-gateway
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  addresses:
  - type: NamedAddress
    value: gke-gateway
I've tried multiple different combinations of this config, including with an explicit IPAddress and without allowedRoutes, but no matter what I try, it doesn't seem to work. I can only see the initial ADD and UPDATE events in the output of kubectl describe gateway external-https, and there are no logs for it to be found anywhere, as far as I know (the GKE Gateway controller is part of the GKE control plane and doesn't expose any logging to customers, from what I understand).
The only time I was able to make either an internal or an external Gateway work is when using the HTTP protocol, i.e. without certificates. Hence, I think this has to do with HTTPS, and probably more specifically with linking to the managed wildcard certificate.
Additionally, I should mention that my attempts at deploying the Gateway fail most of the time (i.e. the resource gets "stuck" in the same way), even when reusing a previously working HTTP config. I'm not sure what the source of this flakiness is (apart from maybe some internal quota), but I imagine it is to be expected, as the service is still in Beta.
Has anyone been able to actually provision a Gateway with an HTTPS listener and wildcard certs, and how?

Adding host without www not working with Ingress resource

I have a couple of questions:
When we make changes to an Ingress resource, are there any cases where we have to delete the resource and re-create it, or is kubectl apply -f <file_name> sufficient?
When I add the host attribute without www, i.e. my-domain.in, I am not able to access my application, but with www, i.e. www.my-domain.in, it works. What's the difference?
Below is my ingress resource
When I have the host set to my-domain.in, I am unable to access my application, but when I set the host to www.my-domain.in, I can access it.
My domain is with a different provider, and I have added a CNAME record (www) pointing to the DNS name of my ALB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eks-learning-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:a982529496:cerd878ef678df
  labels:
    app: eks-learning-ingress
spec:
  rules:
  - host: my-domain.in    # does not work
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80
First, answering your question 1:
When we make changes to ingress resource, are there any cases where we have to delete the resource and re-create it again or is kubectl apply -f sufficient?
In theory, yes, kubectl apply is the correct way: it will report either ingress unchanged or ingress configured.
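For example, roughly (exact output varies with kubectl and resource versions):
$ kubectl apply -f eks-learning-ingress.yaml
ingress.extensions/eks-learning-ingress configured   # or "unchanged" if nothing differs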
Another valid, documented option is kubectl edit ingress INGRESS_NAME, which saves and applies at the end of the edit session if the output is valid.
I say "in theory" because bugs happen, so we can't fully rule them out, but a bug is the worst-case scenario.
Now the trickier question 2:
When I add the host attribute without www i.e. (my-domain.in), I am not able to access my application but with www i.e. (www.my-domain.in) it works, what's the difference?
To troubleshoot it we need to isolate the components: as in a chain, we have to find which link is broken. One by one:
Endpoint > Domain Provider > Cloud Provider > Ingress > Service > Pod.
DNS Resolution (Domain Provider)
DNS Resolution (Cloud Provider)
Kubernetes Ingress (Ingress > Service > Pod)
DNS Resolution
Domain Provider:
To the Internet, the party that answers for my-domain.in is your Domain Provider.
What are the rules for my-domain.in and its subdomains (like www.my-domain.in or admin.my-domain.in)?
You said "domain is on a different provider and I have added CNAME (www) pointing to DNS name of my ALB."
Are both my-domain.in and www.my-domain.in being resolved to the ALB address?
How does it handle subdomains? How is the request passed on to your cloud?
Cloud Provider:
OK, suppose the cloud provider is receiving the request correctly and distinctly.
Does your ALB have generic or specific rules for subdomains or path requests?
Test with another host, a different VM with a web server.
Check ALB Troubleshooting Page
Kubernetes Ingress
Usually we would start troubleshooting from this part, but since you mentioned it works with www.my-domain.in, we can presume that your service, deployment, and even your ingress structure are working correctly.
You can check the Types of Ingress Docs to get a few examples of how it should work.
Bottom line: I believe your DNS has a record for www.my-domain.in but none for the root (apex) domain, which is why it only works when you enable the ingress for www. Note that a CNAME record cannot be placed at the zone apex, so for my-domain.in itself you need whatever apex-alias mechanism your DNS provider offers (often called ALIAS or ANAME), or a zone, such as Route 53, that supports alias records pointing at the ALB.
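A quick way to confirm this from a shell (domain as in the question):
# should print the ALB's addresses via the www CNAME
dig +short www.my-domain.in
# if this prints nothing, the apex has no record pointing at the ALB
dig +short my-domain.in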

Why does GCE Load Balancer behave differently through the domain name and the IP address?

A backend service happens to be returning Status 404 on the health check path of the Load Balancer. When I browse to the Load Balancer's domain name, I get "Error: Server Error/ The server encountered a temporary error", and the logs show
"type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
statusDetails: "failed_to_pick_backend", which makes sense.
When I browse to the Load Balancer's static IP, my browser shows the 404 error message which the underlying Kubernetes Pod returned. In other words, the Load Balancer passed the request on despite the failed health check.
Why these two different behaviors?
[Edit]
Here is the yaml for the Ingress that created the Load Balancer:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
I did a "deep dive" into that and managed to reproduce the situation on my GKE cluster, so now I can tell that there are a few things combined here.
A backend service happens to be returning Status 404 on the health check path of the Load Balancer.
There could be 2 options (it is not clear from the description you have provided).
something like:
"Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds."
This one you are getting from the load balancer when the health check fails for the pod. The official documentation on the GKE Ingress object says that
a Service exposed through an Ingress must respond to health checks from the load balancer.
Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
Serve a response with an HTTP 200 status to GET requests on the / path.
Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
So the health-check handling needs to be fixed; a sketch of the readiness-probe option follows below. You can check the load balancer's details in the GCP Console under Network Services - Load Balancing.
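A minimal sketch of the readiness-probe option (pod name, image, and port are hypothetical; the Service behind the Ingress must target the same container port the probe uses):
apiVersion: v1
kind: Pod
metadata:
  name: myservice-pod        # hypothetical pod backing the myservice Service
  labels:
    app: myservice
spec:
  containers:
  - name: web
    image: nginx:1.17.6
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /              # must return HTTP 200 for the GCLB health check
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10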
"404 Not Found -- nginx/1.17.6"
This one is clear. That is the response returned by the endpoint that myservice sends requests to. It looks like something is misconfigured there. My guess is that the pod simply can't serve that request properly; it could be an nginx web-server issue, etc. Please check the configuration to find out why the pod can't serve the request.
While playing with the setup I found an image that lets you check whether a request has reached the pod, and see the request's headers.
So it is possible to create a pod like:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run: fake-web
  name: fake-default-knp
  # namespace: kube-system
spec:
  containers:
  - image: mendhak/http-https-echo
    imagePullPolicy: IfNotPresent
    name: fake-web
    ports:
    - containerPort: 8080
      protocol: TCP
to be able to see all the headers of incoming requests (kubectl logs -f fake-default-knp).
When I browse to the Load Balancer's Static IP, my browser shows the 404 Error Message which the underlying Kubernetes Pod returned.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
Upon creation of such an Ingress object, there will be at least 2 backends in the GKE cluster:
- the backend you specified upon Ingress creation (the myservice one)
- the default one (created upon cluster creation).
kubectl get pods -n kube-system -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP
l7-default-backend-xyz   1/1     Running   0          20d   10.52.0.7
Please note that myservice serves only requests that have the Host header set to example.com. The rest of the requests are sent to the "default backend". That is the reason why you receive the "default backend - 404" error message when browsing to the LoadBalancer's IP address.
Technically there is a default-http-backend service that has l7-default-backend-xyz as an endpoint.
kubectl get svc -n kube-system -o wide
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
default-http-backend   NodePort   10.0.6.134   <none>        80:31806/TCP   20d   k8s-app=glbc
kubectl get ep -n kube-system
NAME                   ENDPOINTS        AGE
default-http-backend   10.52.0.7:8080   20d
Again, that's the "object" that returns the "default backend - 404" error for requests whose "Host" header doesn't equal the one you specified in the Ingress.
Hope that sheds some light on the issue :)
EDIT:
"myservice serves only requests that have Host header set to example.com." So you are saying that requests go to the LB only when there is a host header?
Not exactly. The LB receives all the requests and routes them according to the "Host" header value. Requests with the example.com Host header are served by the myservice backend.
To put it simply, the logic is as follows:
the request arrives;
the system checks the Host header (to determine the user's backend);
the request is served if there is a suitable user backend (according to the Ingress config) and that backend is healthy; otherwise, if the backend is in a non-healthy state, "Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds." is thrown;
if the request's Host header doesn't match any host in the Ingress spec, the request is sent to the l7-default-backend-xyz backend (not the one mentioned in the Ingress config), which replies with the "default backend - 404" error.
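A quick client-side check with curl (the LB IP is a placeholder):
# Host header matches the Ingress rule -> routed to myservice
curl -H "Host: example.com" http://<LB-IP>/
# no matching Host header -> l7-default-backend -> "default backend - 404"
curl http://<LB-IP>/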
Hope that makes it clear.

How to always allow HTTP OPTIONS requests in Istio?

We've designed our API to use Istio JWT authentication, which is mandatory, and at the same time we use CORS. The problem is that our JS code does ajax calls, and the HTTP OPTIONS pre-flight request is sent without the JWT Authorization header. Unfortunately, the pre-flight request is blocked by Istio. How can we solve this?
Not sure if I understood your question correctly, but I think Service Entry will solve this.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).
Service Entry for your example might look like the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.foobar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
  resolution: DNS

Istio JWT verification against JWKS with internally signed certificate

I'm attempting to configure an Istio authentication policy to validate our JWT.
I set the policy and can see it take effect; however, it won't allow anything to connect. When applying the policy, if I inspect the istio-pilot logs, I can see it failing to retrieve the signing keys, giving a certificate error.
2018-10-24T03:22:41.052354Z error model Failed to fetch pubkey from "https://iam.company.com.au/oauth2/jwks": Get https://iam.company.com.au/oauth2/jwks: x509: certificate signed by unknown authority
2018-10-24T03:22:41.052371Z warn Failed to fetch jwt public key from "https://iam.company.com.au/oauth2/jwks "
This, I assume, is due to the server using a TLS certificate signed by our corporate CA.
How do I get istio-pilot to trust certs from our CA? I have tried installing ca-certificates and including our CA public key in the Ubuntu certificates, but it still won't work.
Policy:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "our-service-jwt-example"
spec:
targets:
- name: our-service
origins:
- jwt:
issuer: iam.company.com.au
audiences:
- YRhT8xWtcLrOQmqJUGPA1p6O6mUa
jwksUri: "https://iam.company.com.au/oauth2/jwks"
principalBinding: USE_ORIGIN
Pilot does the JWKS resolving for Envoy, so Pilot needs to have the CA certificate. At the moment there is no way to add a CA cert to Pilot unless you add the cert when deploying Pilot with Istio. https://github.com/istio/istio/blob/master/pilot/pkg/model/jwks_resolver.go
This has been added as of Istio 1.4:
https://github.com/istio/istio/pull/17176
You can provide an extra root certificate in PEM format via the pilot.jwksResolverExtraRootCA Helm chart value (this also works with IstioOperator for more recent versions of Istio). It creates a ConfigMap containing an extra.pem file that gets mounted into the Istio pilot container as /cacerts/extra.pem, from where it is picked up automatically.
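A minimal IstioOperator sketch of that value (the PEM body is a placeholder for your corporate CA certificate):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      # extra root CA trusted by istiod's JWKS fetcher; placeholder PEM below
      jwksResolverExtraRootCA: |
        -----BEGIN CERTIFICATE-----
        ...your corporate CA certificate, PEM-encoded...
        -----END CERTIFICATE-----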