How can we generate Self Managed Certificates in GCP instance? - google-cloud-platform

I am testing with OpenSSL on a GCP instance. How can I generate self-managed certificates on a GCP instance?

Once the certificate and domain status are active, it can take up to 30 minutes for your load balancer to begin using your self-managed SSL certificate.
To test this you can run the following OpenSSL command, replacing:
DOMAIN with your DNS name
IP_ADDRESS with the IP address of your load balancer.
echo | openssl s_client -showcerts -servername DOMAIN -connect IP_ADDRESS:443 -verify 99 -verify_return_error
This command outputs the certificates that the load balancer presents to the client. Along with other detailed information, the output should include the certificate chain and end with:
Verify return code: 0 (ok)
For more information you can refer to this link.
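To actually generate a self-managed certificate for testing, a minimal sketch with OpenSSL looks like this (the key size, validity period, file names, and resource name below are illustrative assumptions):

openssl genrsa -out private-key.pem 2048
openssl req -new -x509 -key private-key.pem -out certificate.pem -days 365 -subj "/CN=DOMAIN"
# Upload the pair as a self-managed SSL certificate resource:
gcloud compute ssl-certificates create my-self-managed-cert \
    --certificate=certificate.pem \
    --private-key=private-key.pem

For production you would use a CA-signed certificate instead of a self-signed one; the upload step is the same.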

There are many ways to issue certificates; let's focus on a Kubernetes cluster running on Google (GKE), using a custom resource called ManagedCertificate together with ingress rules.
You must own the domain name, and the name must be no longer than 63 characters.
Create a reserved (static) external IP address using the following command, or use the Google console.
gcloud compute addresses create gke-static-ip --global
Create Managed Certificate
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: gke-certificate
spec:
  domains:
    - DOMAIN
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: gke-static-ip
    networking.gke.io/managed-certificates: gke-certificate
spec:
  backend:
    serviceName: hello-world-service
    servicePort: 80
In my case I used Cloud Endpoints as the domain name.
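To check whether the managed certificate has been provisioned, you can describe the resource (the name matches the manifest above):

kubectl describe managedcertificate gke-certificate

The certificate status should eventually change to Active; until then the load balancer will not serve HTTPS for the domain.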

Related

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX YAML looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - app.mydomain.com
      secretName: myapp-tls
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp-service
              servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand it has something to do with the EKS default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain: it just resolves to the ingress address instead of my desired address, and thus my app doesn't show up.
The domain is registered with Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves to the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set this up? Should I give up on NGINX ingress on EKS and move to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name:             myapp-ingress
Namespace:        default
Address:          ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  myapp-tls terminates app.mydomain.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  app.mydomain.com
                    /     myapp-service:80 (172.31.2.238:8000)
Annotations:        cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
Events:             <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use ours in layer 7 mode.
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
    - host: service.d.tld
      http:
        paths:
          - path: /?(.*) # <---
            backend:
              serviceName: my-service
              servicePort: http
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
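As a sketch of those debugging commands (the namespace and deployment name below are the common defaults for the NGINX ingress controller and may differ in your install):

kubectl logs -n ingress-nginx deployment/ingress-nginx-controller --tail=100
kubectl get events --sort-by=.metadata.creationTimestamp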
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the DNS entry?
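You can check what the record actually resolves to with dig (the hostname is a placeholder):

dig +short CNAME app.mydomain.com
dig +short app.mydomain.com

For an ELB, the hostname should normally be a CNAME to the *.elb.amazonaws.com name rather than an A record to a fixed IP, since ELB addresses can change.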

Google cloud HTTPs ERR_SSL_VERSION_OR_CIPHER_MISMATCH

I followed this tutorial https://cloud.google.com/storage/docs/hosting-static-website
But I am not able to reach the site over HTTPS because of ERR_SSL_VERSION_OR_CIPHER_MISMATCH / SSL_ERROR_NO_CYPHER_OVERLAP, depending on the browser.
I use a managed certificate provided by Google, but no browser seems to be compatible with it. I use the GCP default SSL policy, and I also tried creating one for testing with minimal requirements of TLS 1.0, but nothing changed.
Yes, if you are using a Google-managed certificate it sometimes takes time to propagate to the associated domain, so you can use curl or dig to verify it; propagation can take up to 24 hours, which is the maximum time.
Please verify the following points:
verify your website is pointing to the frontend of the load balancer
check the state of the Google-managed certificate added on the frontend of the load balancer
verify that the frontend is using HTTPS and the backend is using HTTP
verify your SSL certificate
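As a quick sketch of those checks (DOMAIN and LB_IP are placeholders for your own values):

dig +short DOMAIN
curl -vI https://DOMAIN
echo | openssl s_client -servername DOMAIN -connect LB_IP:443 2>/dev/null | openssl x509 -noout -subject -dates

The dig output confirms DNS points at the load balancer, and the openssl pipeline prints the subject and validity of the certificate the frontend is actually serving.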
I'm sure it's a problem with the DNS server. If your config is correct, you have to wait a few hours more and redeploy again.
In my case, I was setting up a subdomain with a different IP than the one used for the domain.
My managed certificate was something like this:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-certificate
spec:
  domains:
    - www.sub.example.com
My ingress was fine:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: st-ext-ip-prod
    networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
    networking.gke.io/managed-certificates: my-certificate
spec: ...
The problem was the configuration in my DNS server. In my certificate I was using the domain starting with www, but on my server I didn't have the CNAME to support www.sub:
#        A      DOMAIN-IP
www      CNAME  example.com
sub      A      SUBDOMAIN-IP
www.sub  CNAME  sub.example.com
After adding that configuration (the CNAME for www.sub) I had to wait about 5 hours (it could take more).
I had to redeploy everything from the beginning, and finally I didn't see the ERR_SSL_VERSION_OR_CIPHER_MISMATCH error again.

Redirect URLs using Google Cloud

I have a domain (example.com) already configured in Cloud DNS. With this domain I can access microservices that are in a GKE cluster. I use the istio-ingressgateway IP in Cloud DNS to make the association between the cluster and the domain.
Now I have another domain (newexample.com) with a custom certificate for HTTPS connections. Is there a way to redirect all the requests to newexample.com to example.com? I do not want to change anything in the GKE/Istio configuration if possible.
Each method will require some reconfiguration on the GKE/Istio side.
One of the solutions is to have a CNAME record in Cloud DNS and an SSL certificate with Alternative Names.
With the above solution you will be able to send requests to your GKE/Istio cluster with both domain names, assuming correct Istio configuration.
What is CNAME?
CNAME is a Canonical Name Record or Alias Record.
It is a type of resource record in the Domain Name System (DNS) that specifies that one domain name is an alias of another canonical domain name.
Example of a CNAME record:
DNS name     Type   TTL  Data
old.domain.  A      60   1.2.3.4
new.domain.  CNAME  60   old.domain.
Alternative Names:
A SAN or Subject Alternative Name is a structured way to indicate all of the domain names and IP addresses that are secured by the certificate.
Entrustdatacard.com: What is a SAN and how is it used
You can create an SSL certificate that supports both:
old.domain
new.domain
There are plenty of options to do that, for example Let's Encrypt or cert-manager.
Example
I've created an example to show you how to do it:
Configure DNS zone in Cloud DNS
Create a basic app with a service
Create a certificate for example app
Create Istio resources to allow connections to example app
Test
Configure DNS zone in Cloud DNS
You will need to have 2 records:
A record with IP of your Ingress Gateway and name: old.domain
CNAME record pointing to old.domain with name: new.domain
Please take a look at the official documentation: Cloud.google.com: DNS: Records
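As a sketch, the two records can be created with gcloud (the zone name my-zone and INGRESS_GATEWAY_IP are placeholders; gcloud dns record-sets create requires a reasonably recent gcloud):

gcloud dns record-sets create old.domain. --zone=my-zone --type=A --ttl=60 --rrdatas=INGRESS_GATEWAY_IP
gcloud dns record-sets create new.domain. --zone=my-zone --type=CNAME --ttl=60 --rrdatas=old.domain.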
Create a basic app with a service
Below is an example app with a service which will respond with a basic hello:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-dp
spec:
  selector:
    matchLabels:
      app: hello-dp
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-dp
    spec:
      containers:
        - name: hello
          image: "gcr.io/google-samples/hello-app:2.0"
          env:
            - name: "PORT"
              value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-sv
spec:
  selector:
    app: hello-dp
  ports:
    - name: hello-port
      protocol: TCP
      port: 50001
      targetPort: 50001
  type: ClusterIP
Create a certificate for example app
As said previously, a certificate with Alternative Names can be created with Let's Encrypt. I created it with:
GCE VM with Ubuntu 16.04
Open port 80
Domain name old.domain pointing to public ip address of a VM
Guide: Linode.com: Docs: Install let's encrypt to create a SSL certificate
Command to create certificate:
$ ./letsencrypt-auto certonly --standalone -d old.domain -d new.domain
The certificate created in /etc/letsencrypt/archive/ was used to create a TLS secret for GKE with the command:
$ kubectl create secret tls ssl-certificate --cert cert1.pem --key privkey1.pem
Please keep in mind that this certificate was created only for testing purposes, and I strongly advise using a dedicated solution like cert-manager.
PS: If you used this method, please revert the changes in Cloud DNS to point back to the Istio gateway.
Create Istio resources to allow connections to example app
Below are example Istio resources allowing connections to the example app with support for HTTPS:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gw
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ssl-certificate
      hosts:
        - "old.domain"
        - "new.domain"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
    - "old.domain"
    - "new.domain"
  gateways:
    - hello-gw
  http:
    - route:
        - destination:
            host: hello-sv
            port:
              number: 50001
Please take a specific look at:
tls:
  mode: SIMPLE
  credentialName: ssl-certificate
This part ensures that connections to the cluster use HTTPS.
Additionally:
hosts:
  - "old.domain"
  - "new.domain"
The above definition in both resources will allow only connections with the specified domains.
Test
Once all of the above resources are applied, you should be able to enter the following in your browser:
https://old.domain
https://new.domain
and be greeted with below message and valid SSL certificate:
Hello, world!
Version: 2.0.0
Hostname: hello-dp-5dd8b85b56-bk7zr
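You can also verify both names from the command line (a sketch; substitute your real domains):

curl -sv https://old.domain 2>&1 | grep -E 'subject:|issuer:'
curl -sv https://new.domain 2>&1 | grep -E 'subject:|issuer:'

Both requests should show the same certificate, with SANs covering old.domain and new.domain.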

Can't connect static ip to Ingress on GKE

I am trying to connect my ingress to a static IP. I seem to be following all the tutorials, but still I cannot seem to attach my static IP to the ingress. My ingress file is as follows (referring to the static IP "test-ip"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-ip"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - http:
        paths:
          - path: /api/
            backend:
              serviceName: api-cluster-ip-service
              servicePort: 5005
          - path: /
            backend:
              serviceName: web-cluster-ip-service
              servicePort: 80
However, when I run
kubectl get ingress ingress-web
it returns
NAME          HOSTS   ADDRESS   PORTS   AGE
ingress-web   *                 80      4m
without giving the address. On the VPC network's External IP addresses page the static IP is there, and it is global, but it keeps saying: In use by None.
gcloud compute addresses describe test-ip --global
gives
address: 34.240.xx.xxx
creationTimestamp: '2019-03-26T00:34:26.086-07:00'
description: ''
id: '536303927960423409'
kind: compute#address
name: test-ip
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/my-project-adbc8/global/addresses/test-ip
status: RESERVED
What am I missing here?
I ran into this issue. I believe it has been fixed by this pull request.
Changing
kubernetes.io/ingress.global-static-ip-name
to
kubernetes.io/ingress.regional-static-ip-name
worked for me.
I've spent hours trying to figure the issue out.
It simply seems like a bug with GKE.
What solved it was:
Starting the ingress with no static IP
Going to the cloud console on the web under VPC Network > External IP addresses
Waiting for the Ingress IP to show up
Setting it as static, and giving it a name
Adding kubernetes.io/ingress.global-static-ip-name: <ip name> to the Ingress YAML and applying it.
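The promotion in step 4 can also be done from the command line; as a sketch for a global (GCE) ingress IP, assuming the ingress currently shows address 34.240.xx.xxx:

gcloud compute addresses create test-ip --addresses=34.240.xx.xxx --global

This reserves the ephemeral address that is already in use as a named static IP without changing it.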
You have to make sure the IP you created in GCP is Global and not Regional in order to use the following annotation in your ingress:
kubernetes.io/ingress.global-static-ip-name
I had the same problem, but after some research and testing I managed to solve this issue. These are the steps I took:
First you need to create a Global static IP address on GCP.
I happened to use Terraform to do this, e.g. see the example below:
resource "google_compute_global_address" "static" {
name = "global-test-ip"
project = var.gcp_project_id
address_type = "EXTERNAL"
}
based on this documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address
You could however use the GCP console to do this.
Note: I created this Global Static IP in the same GCP project as my GKE cluster.
Once I had completed the creation of the Global Static IP, I then added the following annotation to the Kubernetes ingress YAML file and applied it (i.e. kubectl apply -f ingress.yaml):
annotations:
  kubernetes.io/ingress.global-static-ip-name: "global-test-ip"
Note: it took a few minutes for the Ingress and Google Load balancer to update after I applied this ingress change.
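To watch the address being assigned after applying the change, something like this works (substitute your ingress name):

kubectl get ingress ingress-web --watch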
The first thing you should check is the status of the IP, e.g.
gcloud compute addresses describe traefik --global
You should see something along the lines of:
address: 34.111.200.XXX
addressType: EXTERNAL
creationTimestamp: '2022-07-25T14:06:48.827-07:00'
description: ''
id: '5625073968713218XXX'
ipVersion: IPV4
kind: compute#address
name: traefik
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/contrawork/global/addresses/traefik
status: RESERVED
Your Ingress should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: 'gce'
    kubernetes.io/ingress.global-static-ip-name: 'traefik'
  name: secondary-ingress
spec:
  defaultBackend:
    service:
      name: 'traefik'
      port:
        number: 80
After this is deployed, within 5 minutes you should see the status change to IN_USE.
If not, I would attempt to delete and re-create the Ingress resource.
If it still does not happen, then I would check in the documentation whether you have properly configured the cluster, e.g. ensure that the GKE cluster has "HTTP Load Balancing" enabled.
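A quick way to check that add-on from the command line (the cluster name and zone are placeholders):

gcloud container clusters describe my-cluster --zone europe-west1-b --format="value(addonsConfig.httpLoadBalancing)"

If it shows as disabled, it can be enabled with gcloud container clusters update my-cluster --zone europe-west1-b --update-addons=HttpLoadBalancing=ENABLED.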

Reference ConfigMap / Secret in Kubernetes object metadata

I have a Kubernetes cluster provisioned on AWS with kops. I use the Route 53 mapper to configure the ELB based on Service annotations, and I use namespaces for different environments (dev, test, prod), with configuration defined in ConfigMap and Secret objects.
Environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
    - name: https
      port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor so I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not using any templating before sending the file to the API via kubectl.
There is not.
FWIW, it seems nuts that that mapper was designed to pull cert data from annotations on a Service. Service objects are not otherwise secret.
The mapper should be able to consume cert data from a Secret that has well defined fields to indicate what domain should be wired with what cert data in front of what service.