GKE with Identity Aware Proxy returns Error code 9 - google-cloud-platform

I have a dockerized Flask application running on Kubernetes in Google Cloud Platform with Identity-Aware Proxy enabled. I can run a "Hello World" website, but problems occur when I try to use the signed JWT headers.
In my browser I am presented with
There was a problem with your request. Error code 9
My app is like this example and I use gunicorn to run it. The trouble seems to happen on the first line
jwt = request.headers.get('x-goog-iap-jwt-assertion')
but that just makes no sense to me: I can return a string before that line but not after it. Any suggestions?
Details on the current kubernetes cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-tools-app
spec:
  selector:
    matchLabels:
      app: internal-tools
  template:
    metadata:
      labels:
        app: internal-tools
    spec:
      containers:
        - name: internal-web-app
          image: <<MY_IMAGE>>
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: internal-tools-backend-config
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: internal-tools-oauth
---
apiVersion: v1
kind: Service
metadata:
  name: internal-tools-service
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "internal-tools-backend-config"}'
spec:
  type: NodePort
  selector:
    app: internal-tools
  ports:
    - name: it-first-port
      protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: internal-tools-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "letsencrypt-internal-tools"
  name: internal-tools-ingress
spec:
  rules:
    - host: <<MY_DOMAIN>>
      http:
        paths:
          - backend:
              serviceName: internal-tools-service
              servicePort: it-first-port
EDIT
Further investigation shows
ImportError: Error loading shared library libssl.so.45: No such file or directory (needed by /usr/local/lib/python3.6/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so)
when running the following:
jwt.decode(
    iap_jwt, key,
    algorithms=['ES256'],
    audience=expected_audience)

I just fixed this error code tonight by deleting and recreating my frontend and Google-managed cert objects in the GCP console. It seemed to happen after I decommissioned and repurposed a cluster and deployed my app on a brand-new cluster with the same static IP address.

I got this answer from the Google Cloud Team bug tracker:
The Error code 9 occurs when multiple requests for re-authentication occur simultaneously (in particular, often caused by browsers reloading multiple windows/tabs at once). This flow currently requires a temporary cookie flow to succeed first, and this cookie is unique to that flow. However, if one flow starts before the previous one finishes, for example with multiple simultaneous refreshes in the same browser, this will cause the error you saw and cause users to face that auth page.
You can try the options below to overcome the issue: restart the browser, clear cookies, or implement better session handling (session handlers).
https://issuetracker.google.com/issues/155005454

Related

How can I publicly access my application running on a Kubernetes cluster on Amazon EKS?

Kubernetes & AWS EKS newbie here.
I have deployed a simple Node.js web application onto a cluster on Amazon EKS. When I send a GET request to the root (/) route, my app responds with the message: Hello from Node.
My Deployment and Service configuration files are as follows:
eks-sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-sample-app
  template:
    metadata:
      labels:
        app: eks-sample-app
    spec:
      containers:
        - name: eks-sample-app-container
          image: sundaray/node-server:v1
          ports:
            - containerPort: 8000
eks-sample-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: eks-sample-app-service
spec:
  type: NodePort
  ports:
    - port: 3050
      targetPort: 8000
      nodePort: 31515
  selector:
    app: eks-sample-app
After I deployed my app, I checked the container log and I got the right response: Server listening on port 8000.
Now, I want to access my application from my browser. How do I get the URL address where I can access my app?
What you are looking for is Ingress
An API object that manages external access to the services in a cluster, typically HTTP.
https://kubernetes.io/docs/concepts/services-networking/ingress/
https://aws.amazon.com/premiumsupport/knowledge-center/eks-access-kubernetes-services/
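For illustration only, a minimal sketch of such an Ingress for the Service above, assuming the AWS Load Balancer Controller is installed in the cluster and using eks.example.com as a placeholder host:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eks-sample-app-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # provision an internet-facing ALB
    alb.ingress.kubernetes.io/target-type: instance     # route to the NodePort on each node
spec:
  ingressClassName: alb
  rules:
    - host: eks.example.com            # placeholder host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eks-sample-app-service
                port:
                  number: 3050         # the Service port defined above
Once the controller provisions the load balancer, kubectl get ingress eks-sample-app-ingress shows the address you can open in the browser or point a DNS record at.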

Override all existing traffic routes

I have an nginx container that handles html content & traffic routing via a VirtualService.
I have a separate maintenance nginx container that I want to display (when I'm doing maintenance), and on this occasion I want all traffic to be routed to this maintenance container rather than the normal one stated in the first paragraph. I don't really want to tweak/patch the original traffic routes, so I'm looking for a way to have some form of override traffic-routing rule.
From what I have read, the order of rules is based on the creation date, so that didn't really help me.
So if anyone has any ideas how I can force all traffic to be routed to a specific "maintenance" service I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First, create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
    - name: maintenance
      labels:
        version: maintenance
    - name: v1
      labels:
        version: v1
Next, set up your route in the VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
    - example.com
  gateways:
    - mygateway
  http:
    - name: nginx-route
      match:
        - uri:
            prefix: "/nginx"
      route:
        - destination:
            host: nginx.default.svc.cluster.local
            subset: v1
Now you need one Service and the two Deployments.
The selector in the Service needs to match both Deployments. In a plain Kubernetes setup this would mean that traffic is routed between all workloads of both Deployments, but because of Istio and the version setup, traffic is only sent to the currently configured version.
The Deployment for the maintenance version needs to be labeled with version: maintenance and the actual version needs to be labeled with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: http        # port added for completeness; adjust to your nginx container port
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  selector:              # required for apps/v1
    matchLabels:
      app: nginx
      version: maintenance
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
        - image: nginx-maintenance
          [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  selector:              # required for apps/v1
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
        - image: nginx-v1
          [...]
If you want the traffic to be routed to the maintenance version, just change the subset statement in the VirtualService and reapply it.
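For example, a sketch of the same VirtualService with only the subset changed:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
    - example.com
  gateways:
    - mygateway
  http:
    - name: nginx-route
      match:
        - uri:
            prefix: "/nginx"
      route:
        - destination:
            host: nginx.default.svc.cluster.local
            subset: maintenance   # switched from v1 to maintenance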
If you want in-cluster traffic to always be sent to your v1 version for some reason, you need another VirtualService that uses the mesh gateway. Otherwise, cluster-internal traffic will be divided between all workloads (v1 and maintenance).
Alternatively, you could add the mesh gateway and the host to the VirtualService from above, but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
    - nginx.default.svc.cluster.local
  gateways:
    - mesh
  http:
    - name: nginx-route
      match:
        - uri:
            prefix: "/nginx"
      route:
        - destination:
            host: nginx.default.svc.cluster.local
            subset: v1
Furthermore, you could even use more versions and test updates by sending only a portion of your traffic to the new version.
To get a better understanding and some more ideas about versioning with Istio, please refer to this article (it's actually quite old but the concept is still relevant).
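As a rough sketch of that weight-based idea (the values and the v2 subset are illustrative; v2 would also need to be defined in the DestinationRule), traffic can be split between subsets with weight:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
    - example.com
  gateways:
    - mygateway
  http:
    - name: nginx-route
      match:
        - uri:
            prefix: "/nginx"
      route:
        - destination:
            host: nginx.default.svc.cluster.local
            subset: v1
          weight: 90        # keep 90% of requests on the current version
        - destination:
            host: nginx.default.svc.cluster.local
            subset: v2      # hypothetical new version
          weight: 10        # send 10% canary traffic to it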

Istio Authorization Policy IP whitelisting

Does anyone know how to do IP whitelisting properly with an Istio AuthorizationPolicy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector, like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
    - from:
        - source:
            IpBlocks:
              - xx.xx.xx.xx
I was not able to get this to work, and I am using Istio 1.6.8.
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses listed in ipBlocks are allowed to reach the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it does not block anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
    - from:
        - source:
            ipBlocks:
              - 173.18.180.128
              - 173.18.191.159
              - 173.20.58.39
Note that ipBlocks is in lower camelCase.
Sometimes it takes a while before the policy is effective.
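As a sketch of the blacklisting variant mentioned above (same workload labels assumed, the address is illustrative): with an ALLOW policy, sources listed under notIpBlocks match no rule and are therefore rejected, while everything else is allowed.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblacklist
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
    - from:
        - source:
            notIpBlocks:
              - 203.0.113.7   # illustrative address to block; all other sources are allowed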

Sonar cannot be accessed via an Istio virtual service but can be accessed locally after port forwarding

I am trying to implement SonarQube in a Kubernetes cluster. The deployment is running properly and is also exposed via a VirtualService. I am able to open the UI via localhost:port/sonar, but I am not able to access it through my external IP. I understand that Sonar binds to localhost and does not allow access from outside the remote server. I am running this on GKE with a MySQL database. Here is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonar
  labels:
    service: sonarqube
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      name: sonarqube
      labels:
        name: sonarqube
    spec:
      terminationGracePeriodSeconds: 15
      initContainers:
        - name: volume-permission
          image: busybox
          command:
            - sh
            - -c
            - sysctl -w vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: sonarqube
          image: sonarqube:6.7
          resources:
            limits:
              memory: 4Gi
              cpu: 2
            requests:
              memory: 2Gi
              cpu: 1
          args:
            - -Dsonar.web.context=/sonar
            - -Dsonar.web.host=0.0.0.0
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:mysql://***.***.**.*:3306/sonar?useUnicode=true&characterEncoding=utf8
          ports:
            - containerPort: 9000
              name: sonarqube-port
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: sonarqube
    version: v1
  name: sonarqube
  namespace: sonar
spec:
  selector:
    name: sonarqube
  ports:
    - name: http
      port: 80
      targetPort: sonarqube-port
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-internal
  namespace: sonar
spec:
  hosts:
    - sonarqube.staging.jeet11.internal
    - sonarqube
  gateways:
    - default/ilb-gateway
    - mesh
  http:
    - route:
        - destination:
            host: sonarqube
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-external
  namespace: sonar
spec:
  hosts:
    - sonarqube.staging.jeet11.com
  gateways:
    - default/elb-gateway
  http:
    - route:
        - destination:
            host: sonarqube
---
The deployment completes successfully. My exposed service gives a public IP that has been mapped to the host URL, but I am unable to access the service at that URL.
I think I need to change the mapping so that Sonar binds to the server IP, but I don't understand how to do that. I cannot bind it to my cluster IP, nor to my internal or external service IP.
What should I do? Please help!
I had the same issue recently and I managed to get this resolved today.
I hope the following solution will work for anyone facing the same issue!
Environment
Cloud Provider: Azure - AKS
This should work regardless of which provider you use.
Istio Version: 1.7.3
K8s Version: 1.16.10
Tools - Debugging
kubectl logs -n istio-system -l app=istiod
Shows logs from istiod and events happening in the control plane.
istioctl analyze -n <namespace>
This generally gives you any warnings and errors for a given namespace.
Lets you know if things are misconfigured.
Kiali - istioctl dashboard kiali
See if you are getting inbound traffic.
Also, shows you any misconfigurations.
Prometheus - istioctl dashboard prometheus
query metric - istio_requests_total. This shows you the traffic going into the service.
If there's any misconfiguration you will see the destination_app as unknown.
Issue
Unable to access sonarqube UI via external IP, but accessible via localhost (port-forward).
Unable to route traffic via Istio Ingressgateway.
Solution
Sonarqube Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    name: sonarqube
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9000
      targetPort: 9000
  selector:
    app: sonarqube
status:
  loadBalancer: {}
Your targetPort is the container port. To avoid any confusion, just set the Service port number to the same value as the Service targetPort.
The port name is very important here. “Istio requires the service ports to follow the naming form of <protocol>[-suffix], where the -suffix part is optional” - KIA0601 - Port name must follow <protocol>[-suffix] form
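For illustration, a ports entry that satisfies the naming rule (the names are examples):
ports:
  - name: http              # a bare protocol name is valid
    port: 9000
    targetPort: 9000
  # a name such as http-sonarqube (protocol plus a -suffix) would be equally valid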
Istio Gateway and VirtualService manifest for sonarqube
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 9000
        name: http
        protocol: HTTP
      hosts:
        - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
    - "XXXX.XXXX.com.au"
  gateways:
    - sonarqube-gateway
  http:
    - route:
        - destination:
            host: sonarqube
            port:
              number: 9000
Gateway protocol must be set to HTTP.
The Gateway server port and the VirtualService destination port are the same here. If your app Service port is different, then your VirtualService destination port number should match the app Service port. The Gateway server port should match the app Service targetPort.
Now comes the fun bit: the hosts. If you want to access the service from outside the cluster, then you need your host name (whatever host name you want to map the SonarQube server to) as a DNS A record pointing to the external public IP address of the istio-ingressgateway.
To get the EXTERNAL-IP address of the ingressgateway, run kubectl -n istio-system get service istio-ingressgateway.
If you do a simple nslookup (run nslookup <hostname>), the IP address you get must match the IP address assigned to the istio-ingressgateway service.
Expose a new port in the ingressgateway
Note that your sonarqube Gateway port is a new port that you are introducing to Kubernetes, and you’re telling the cluster to listen on that port. But your load balancer doesn’t know about this port, therefore you need to open the specified gateway port on your Kubernetes external load balancer. Ref - Info
You don’t need to manually change your load balancer service. You just need to update the ingress gateway to include the new port, which will update the load balancer automatically.
You can identify whether the port is causing issues by running istioctl analyze -n sonarqube. You should get the following warning:
Warn [IST0104] (Gateway sonarqube-gateway.sonarqube) The gateway refers to a port that is not exposed on the workload (pod selector istio=ingressgateway; port 9000)
Error: Analyzers found issues when analyzing namespace: sonarqube. See https://istio.io/docs/reference/config/analysis for more information about causes and resolutions.
You should get the corresponding error in the control plane. Run kubectl logs -n istio-system -l app=istiod.
At this point you need to update the Istio ingressgateway service to expose the new port. Run kubectl edit svc istio-ingressgateway -n istio-system and add the following section to the ports.
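A sketch of such a ports entry (the name and values are illustrative and should match your Gateway port):
# under spec.ports of the istio-ingressgateway Service
- name: http-sonarqube   # must follow the <protocol>[-suffix] naming convention
  port: 9000             # the port your Gateway listens on
  protocol: TCP
  targetPort: 9000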
Bypass creating a new port
In the previous section you saw how to expose a new port. This is optional and depends on your use case.
In this section you will see how to use a port that is already exposed.
If you look at the service of the istio-ingressgateway, you can see that there are default ports exposed. Here we are going to use port 80.
Your setup will look like the following; to avoid specifying the port along with your host name, just add a match uri prefix, as shown in the VirtualService manifest below.
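A sketch of that variation of the earlier Gateway and VirtualService (host and prefix are placeholders):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80          # default port already exposed on the ingressgateway
        name: http
        protocol: HTTP
      hosts:
        - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
    - "XXXX.XXXX.com.au"
  gateways:
    - sonarqube-gateway
  http:
    - match:
        - uri:
            prefix: /        # narrow this (for example to your sonar.web.context) if several apps share the gateway
      route:
        - destination:
            host: sonarqube
            port:
              number: 9000   # the app Service port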
Time for testing
If everything works up to this point as expected, then you are good to go.
During testing I made one mistake by not specifying the port. If you get a 404 status, that is still a good thing: this way you can verify which server it is using. If you set things up correctly, it should be the istio-envoy server, not nginx.
Without specifying the port, this will only work if you add the match uri prefix.
Do not pass the arguments; just try running without them once, it is working for me.
This is my deployment file, I hope it is helpful:
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-service
spec:
  selector:
    app: sonarqube
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:7.1
          resources:
            requests:
              memory: "1200Mi"
              cpu: .10
            limits:
              memory: "2500Mi"
              cpu: .50
          volumeMounts:
            - mountPath: "/opt/sonarqube/data/"
              name: sonar-data
            - mountPath: "/opt/sonarqube/extensions/"
              name: sonar-extensions
          env:
            - name: "SONARQUBE_JDBC_USERNAME"
              value: "root" # Put your db username
            - name: "SONARQUBE_JDBC_URL"
              value: "jdbc:mysql://192.168.112.4:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true" # DB URL
            - name: "SONARQUBE_JDBC_PASSWORD"
              value: password
          ports:
            - containerPort: 9000
              protocol: TCP
      volumes:
        - name: sonar-data
          persistentVolumeClaim:
            claimName: sonar-data
        - name: sonar-extensions
          persistentVolumeClaim:
            claimName: sonar-extensions

kube-controller-manager outputs an error "cannot change NodeName"

I use Kubernetes on AWS with CoreOS and a flannel VLAN network
(I followed this guide: https://coreos.com/kubernetes/docs/latest/getting-started.html).
The k8s version is 1.4.6.
And I have the following node-exporter DaemonSet.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
spec:
  template:
    metadata:
      labels:
        app: node-exporter
        tier: monitor
        category: platform
      name: node-exporter
    spec:
      containers:
        - image: prom/node-exporter:0.12.0
          name: node-exporter
          ports:
            - containerPort: 9100
              hostPort: 9100
              name: scrape
      hostNetwork: true
      hostPID: true
When I run this, kube-controller-manager outputs an error repeatedly as below:
E1117 18:31:23.197206 1 endpoints_controller.go:513]
Endpoints "node-exporter" is invalid:
[subsets[0].addresses[0].nodeName: Forbidden: Cannot change NodeName for 172.17.64.5 to ip-172-17-64-5.ec2.internal,
subsets[0].addresses[1].nodeName: Forbidden: Cannot change NodeName for 172.17.64.6 to ip-172-17-64-6.ec2.internal,
subsets[0].addresses[2].nodeName: Forbidden: Cannot change NodeName for 172.17.80.5 to ip-172-17-80-5.ec2.internal,
subsets[0].addresses[3].nodeName: Forbidden: Cannot change NodeName for 172.17.80.6 to ip-172-17-80-6.ec2.internal,
subsets[0].addresses[4].nodeName: Forbidden: Cannot change NodeName for 172.17.96.6 to ip-172-17-96-6.ec2.internal]
Just for information, despite this error message, node_exporter is accessible on e.g. 172.17.96.6:9100. My nodes are in a private network including the k8s master.
But these logs are emitted so often that it is hard to see other logs in our log console. Could someone tell me how to resolve this error?
Because I built my k8s cluster from scratch, the cloud-provider=aws flag was not activated at first; I recently turned it on, but I'm not sure if that is related to this issue.
It looks like this is caused by another of my manifest files:
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
  annotations:
    prometheus.io/scrape: 'true'
spec:
  clusterIP: None
  ports:
    - name: scrape
      port: 9100
      protocol: TCP
  selector:
    app: node-exporter
  type: ClusterIP
I thought this was necessary to expose the node-exporter DaemonSet above, but it seems to introduce some sort of conflict when hostNetwork: true is set in a DaemonSet (actually, pod) manifest. I'm not 100% certain, but after I deleted this service the error disappeared, while I can still access 172.17.96.6:9100 from outside the k8s cluster.
I just followed this post when setting up Prometheus and node-exporter:
https://coreos.com/blog/prometheus-and-kubernetes-up-and-running.html
In case others face the same problem, I'm leaving my comment here.