GKE Ingress is not working with cert-manager ssl secrets - google-cloud-platform

I am trying to get Let's Encrypt working with a GKE LB. I know there are GCP managed certs, but they will not work with an internal LB, as the challenge will not get passed. Let's Encrypt DNS validation using cert-manager is set up and ready to be used.
❯ k get secrets letsencrypt-prod -o yaml
apiVersion: v1
data:
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlVZTVhXdGNZZUJpMkdadzljRFRLNzY==
kind: Secret
metadata:
  creationTimestamp: "2021-01-24T15:03:39Z"
  name: letsencrypt-prod
  namespace: elastic-system
  resourceVersion: "3636289"
  selfLink: /api/v1/namespaces/elastic-system/secrets/letsencrypt-prod
  uid: f4bec5a9-d3b5-4f4a-9ec6-01a4ce3ba47c
type: Opaque
My Ingress references the secret in its tls section:
spec:
  tls:
  - hosts:
    - staging.example.com
    - staging2.example.com
    secretName: letsencrypt-prod
GCP is reporting this error:
Error syncing to GCP: error running load balancer syncing routine: error getting secrets for Ingress: secret "letsencrypt-prod" does not specify cert as string data
Can anybody help me with what is missing?

As per this, you must provide the secret in a format valid for GCP, built from your already-issued Let's Encrypt certs:
kubectl create secret generic letsencrypt-prod --from-file=tls.crt="cert.pem" --from-file=tls.key="privkey.pem" --dry-run=client -o yaml > output
kubectl apply -f output
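The error above complains specifically that tls.crt is missing, so after re-creating the secret it is worth confirming both keys are present. A quick sanity check (not part of the original answer):
kubectl describe secret letsencrypt-prod
# should list both tls.crt and tls.key with non-zero byte sizes under "Data"
kubectl get secret letsencrypt-prod -o jsonpath='{.data.tls\.crt}' | head -c 40
# prints the start of the base64-encoded cert if it is present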
Also (it seems you are already doing this, but better safe than sorry), you must reference the secret in the tls section of your Ingress, as per this.

Actually, this is easy to miss in the docs (or I was missing it), because the example uses the same name everywhere, including as the metadata name:
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: cert-example
  namespace: example
spec:
  secretName: REAL_NAME_OF_SECRET  # <- this is the name to reference in the Ingress
  issuerRef:
    name: letsencrypt-prod
  dnsNames:
  - 'staging.domain.com'
  - '*.staging.domain.com'
So REAL_NAME_OF_SECRET is what you should put in the Ingress (or anywhere else you want to use tls.crt or tls.key), not the Certificate's metadata.name.
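For illustration, a minimal sketch of an Ingress tls section that consumes it (the hostname and backend service are placeholders, not from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
    - staging.domain.com
    secretName: REAL_NAME_OF_SECRET  # the Certificate's spec.secretName, not its metadata.name
  defaultBackend:
    service:
      name: my-service  # placeholder backend
      port:
        number: 80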

Related

How do I present letsencrypt certificates to Kubernetes nginx (GKE)?

I am learning the Google Cloud Platform, trying to implement my first project, and am getting lost in the tutorials. I am stuck trying to implement an nginx ingress: the ingress controller pod is stuck in CrashLoopBackOff, and the logs show the following error. I know how to do this task with Docker Compose, but not here. Where do I start?
1#1: cannot load certificate "/etc/letsencrypt/live/blah.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/blah.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/blah.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/blah.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
I am not yet certain this is helpful, but I have set up the Certificate Authority Service (https://cloud.google.com/certificate-authority-service/docs/best-practices).
Instead of using that GCP CA setup, I would suggest using cert-manager with the ingress.
cert-manager will get the TLS cert from the Let's Encrypt CA, verify it, and store the certificate in a Kubernetes secret.
You can then attach the secret to the ingress, per host, and use it.
Cert-manager installation
YAML example:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: cluster-issuer-name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: harsh@example.com
    privateKeySecretRef:
      name: secret-name
    solvers:
    - http01:
        ingress:
          class: nginx-class-name
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-class-name
    cert-manager.io/cluster-issuer: cluster-issuer-name
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: example-ingress
spec:
  rules:
  - host: sub.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - sub.example.com
    secretName: secret-name
You can read this blog for reference: https://medium.com/@harsh.manvar111/kubernetes-nginx-ingress-and-cert-manager-ssl-setup-c82313703d0d
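After applying the above, you can watch the certificate request progress; a generic cert-manager sanity check (names and namespace are placeholders):
kubectl get certificate -A
# READY should flip to True once the ACME challenge completes
kubectl describe certificate <cert-name> -n <namespace>
# the Events section shows issuance progress or validation errors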

Google Managed Certificate does not show status

I tried to create a managed certificate for my ingress with this yaml:
---
apiVersion: "networking.gke.io/v1beta1"
kind: "ManagedCertificate"
metadata:
  name: "example-cert-webapi"
spec:
  domains:
  - "foobar.domain.com"
It was successfully created, but when I try to describe the said managed certificate using this command:
kubectl describe managedcertificate example-cert-api
it does not show the status. I was expecting it to be in Provisioning status, but the output of the describe command does not show a status at all. Below is the describe output:
Name:         example-cert-webapi
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-09-27T08:31:12Z
  Generation:          1
  Resource Version:    foobarResourceVersion
  Self Link:           fooBarSelfLink
  UID:                 fooBarUID
Spec:
  Domains:
    foobar.domain.com
Events:  <none>
I have replaced the entries which I think are sensitive data with foobar.
I have also a Cloud DNS setup which corresponds to the domains which I used in the certificate.
Has anyone experienced the same situation? When my ingress deployment finished, the SSL certificate did not take effect.
Thanks in advance!
We have noticed the same issue since yesterday. I can confirm that downgrading to 1.16 solved the problem.
Edit: the issue is created at Google: https://issuetracker.google.com/issues/169595857
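If you need to confirm which version your cluster's control plane is running before or after the downgrade, a sketch (cluster name and zone are placeholders):
gcloud container clusters describe my-cluster --zone us-central1-a --format='value(currentMasterVersion)'
kubectl get nodes
# node versions appear in the VERSION column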

Enabling SSL on GKE endpoints not working correctly

I created an API on GKE using Cloud Endpoints. It is working fine without HTTPS; you can try it here: API without Https
I followed the instructions mentioned here: Enabling SSL for cloud endpoint. After setting up everything mentioned on that page, I'm able to access my endpoints with HTTPS, but with a warning:
Your connection is not private - Back to Safety (Chrome)
Check it here: API with Https
Can you please let me know what I'm missing?
Update
I'm using Google-managed SSL certificates for Cloud Endpoints on GKE.
I followed the steps mentioned in this doc but was not able to successfully add the SSL certificate.
When I go into my cloud console I see:
Some backend services are in UNKNOWN state
Here are my deployment yamls:
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: quran-grpc
spec:
  ports:
  - port: 81
    targetPort: 9000
    protocol: TCP
    name: rpc
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: quran-grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quran-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quran-grpc
  template:
    metadata:
      labels:
        app: quran-grpc
    spec:
      volumes:
      - name: nginx-ssl
        secret:
          secretName: nginx-ssl
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8080",
          "--ssl_port=443",
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=quran.endpoints.utopian-button-227405.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
        - containerPort: 8080
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: nginx-ssl
          readOnly: true
      - name: python-grpc-quran
        image: gcr.io/utopian-button-227405/python-grpc-quran:5.0
        ports:
        - containerPort: 50051
ssl-cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: quran-ssl
spec:
  domains:
  - quran.endpoints.utopian-button-227405.cloud.goog
---
apiVersion: v1
kind: Service
metadata:
  name: quran-ingress-svc
spec:
  selector:
    name: quran-ingress-svc
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quran-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 34.71.56.199
    networking.gke.io/managed-certificates: quran-ssl
spec:
  backend:
    serviceName: quran-ingress-svc
    servicePort: 80
Can you please let me know what I'm doing wrong?
Your SSL configuration is working fine; the reason you are receiving this error is that you are using a self-signed certificate.
A self-signed certificate is a certificate that is not signed by a certificate authority (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide HTTPS services, people who visit that website will see a warning in their browser.
To solve this issue you should buy a valid certificate from a trusted CA, or use Let's Encrypt, which will give you a certificate valid for 90 days; after this period you can renew it.
If you decide to buy an SSL certificate, you can follow the document you described to create a Kubernetes secret and use it in your ingress, simple as that.
But if you don't want to buy a certificate, you can install cert-manager in your cluster; it will help you generate valid certificates using Let's Encrypt.
Here is an example of how to use cert-manager + Let's Encrypt solution to generate valid SSL certificates:
Using cert-manager with Let's Encrypt
cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.
I'm assuming you already have the NGINX ingress installed and working.
Pre-requisites:
- NGINX Ingress installed and working
- HELM 3.0 installed and working
cert-manager install
Note: when running on GKE (Google Kubernetes Engine), you may encounter a 'permission denied' error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, and as such you should 'elevate' your own privileges to those of a 'cluster-admin' before running the install commands below. If you have already run them, you should run them again after elevating your permissions.
Follow the official docs to install, or just use Helm 3.0 with the following commands:
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml
Create a ClusterIssuer for Let's Encrypt. Save the content below in a new file called letsencrypt-production.yaml:
Note: Replace <EMAIL-ADDRESS> with your valid email.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  labels:
    name: letsencrypt-prod
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL-ADDRESS>
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: 'https://acme-v02.api.letsencrypt.org/directory'
Apply the configuration with:
kubectl apply -f letsencrypt-production.yaml
Install cert-manager with Let's Encrypt as a default CA:
helm install cert-manager \
--namespace cert-manager --version v0.8.1 jetstack/cert-manager \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer
Verify the installation:
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-5c6866597-zw7kh 1/1 Running 0 2m
cert-manager-cainjector-577f6d9fd7-tr77l 1/1 Running 0 2m
cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
Using cert-manager
Apply this annotation in your ingress spec:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
Once applied, cert-manager will generate the TLS certificate for the domain configured in the host field, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
  - host: myapp.domain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: my-app
          servicePort: 80
  tls:
  - hosts:
    - myapp.domain.com
    secretName: my-app-tls  # (added) ingress-shim only issues certs for hosts listed under tls; the secret name is a placeholder
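Assuming the tls section above, you can then watch cert-manager work; ingress-shim normally creates a Certificate object named after the tls secretName (my-app-tls here is the placeholder from the example):
kubectl get certificate -n default
# READY flips to True once the ACME challenge completes
kubectl describe certificate my-app-tls -n default
# the Events section shows order/challenge progress or validation errors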

Istio allow only specific IP CIDR and deny rest

I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest should be denied, but this doesn't seem to work:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"]  # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So basically, Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black lists are now deprecated:
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP, hence not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself: if you want to see your source IP, you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: Local, preserving the original client source IP on the ingress gateway, using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
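A quick way to confirm the patch took effect (a sanity check, not part of the original example):
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.externalTrafficPolicy}'
# should print: Local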
And the allow example
The following example creates the authorization policy ingress-policy for the Istio ingress gateway. The policy sets the action field to ALLOW, so the IP addresses specified in ipBlocks can access the ingress gateway; IP addresses not in the list will be denied. ipBlocks supports both single IP addresses and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy=Local
And you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"]  # IPs to allow
  selector:
    matchLabels:
      app: httpbin
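To see which client address actually reaches the gateway, assuming the httpbin sample app from the selector above is routed through it (host and ingress address are placeholders):
curl -s -H "Host: httpbin.example.com" "http://$INGRESS_HOST/headers"
# inspect X-Envoy-External-Address / X-Forwarded-For in the echoed request headers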

Can't connect static ip to Ingress on GKE

I am trying to connect my ingress to a static IP. I seem to be following all the tutorials, but still I cannot seem to attach my static IP to the ingress. My ingress file is as follows (referring to the static IP "test-ip"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-ip"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api/
        backend:
          serviceName: api-cluster-ip-service
          servicePort: 5005
      - path: /
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 80
However, when I run
kubectl get ingress ingress-web
it returns
NAME          HOSTS   ADDRESS   PORTS   AGE
ingress-web   *                 80      4m
without giving the address. On the VPC network "External IP addresses" page, the static IP is there and it is global, but it keeps saying: In use by None
gcloud compute addresses describe test-ip --global
gives
address: 34.240.xx.xxx
creationTimestamp: '2019-03-26T00:34:26.086-07:00'
description: ''
id: '536303927960423409'
kind: compute#address
name: test-ip
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/my-project-adbc8/global/addresses/test-ip
status: RESERVED
What am I missing here?
I ran into this issue. I believe it has been fixed by this pull request.
Changing
kubernetes.io/ingress.global-static-ip-name
to
kubernetes.io/ingress.regional-static-ip-name
worked for me.
I've spent hours trying to figure this issue out.
It simply seems like a bug with GKE.
What solved it was:
1. Starting the ingress with no static IP
2. Going to the cloud console on the web under VPC Network > External IP addresses
3. Waiting for the ingress IP to show up
4. Setting it as static, and giving it a name
5. Adding kubernetes.io/ingress.global-static-ip-name: <ip name> to the Ingress yaml and applying it
You have to make sure the IP you created in GCP is Global and not Regional in order to use the following annotation in your ingress:
kubernetes.io/ingress.global-static-ip-name
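A quick way to check whether an address is global or regional (the name here is from the question; adjust to yours):
gcloud compute addresses list --filter="name=test-ip"
# the REGION column is empty for global addresses
gcloud compute addresses describe test-ip --global
# fails with 'not found' if the address was created as regional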
I had the same problem, but after some research and testing I managed to solve it. These are the steps I took:
First you need to create a global static IP address on GCP.
I happened to use Terraform to do this, e.g. see the example below,
resource "google_compute_global_address" "static" {
name = "global-test-ip"
project = var.gcp_project_id
address_type = "EXTERNAL"
}
based on this documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address
You could however use the GCP console to do this.
Note: I created this global static IP in the same GCP project as my GKE cluster.
Once I had created the global static IP, I added the following annotation to the Kubernetes ingress yaml file and applied it (i.e. kubectl apply -f ingress.yaml):
annotations:
  kubernetes.io/ingress.global-static-ip-name: "global-test-ip"
Note: it took a few minutes for the ingress and Google load balancer to update after I applied this change.
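If you want to watch the address being assigned, a simple sketch using the ingress name from the earlier question:
kubectl get ingress ingress-web --watch
# the ADDRESS column populates once the load balancer picks up the static IP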
The first thing you should check is the status of the IP, e.g.
gcloud compute addresses describe traefik --global
You should see something along the lines of:
address: 34.111.200.XXX
addressType: EXTERNAL
creationTimestamp: '2022-07-25T14:06:48.827-07:00'
description: ''
id: '5625073968713218XXX'
ipVersion: IPV4
kind: compute#address
name: traefik
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/contrawork/global/addresses/traefik
status: RESERVED
Your Ingress should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: 'gce'
    kubernetes.io/ingress.global-static-ip-name: 'traefik'
  name: secondary-ingress
spec:
  defaultBackend:
    service:
      name: 'traefik'
      port:
        number: 80
After this is deployed, within 5 minutes you should see the status change to IN_USE.
If not, I would attempt to delete and re-create the Ingress resource.
If it still does not happen, I would check the documentation to verify you have properly configured the cluster, e.g. ensure that the GKE cluster has "HTTP Load Balancing" enabled.
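As a sketch for that last check (cluster name and zone are placeholders; the field path comes from the GKE API's addonsConfig):
gcloud container clusters describe my-cluster --zone us-central1-a --format='value(addonsConfig.httpLoadBalancing.disabled)'
# empty or False means HTTP load balancing is enabled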