Redirect URLs using Google Cloud - google-cloud-platform

I have a domain (example.com) already configured in Cloud DNS. With this domain I can access microservices that are in a GKE cluster. I use the istio-ingressgateway IP in Cloud DNS to make the association between the domain and the cluster.
Now I have another domain (newexample.com) with a custom certificate for HTTPS connections. Is there a way to redirect all requests for newexample.com to example.com? I do not want to change anything in the GKE/Istio configuration if possible.

Each method will require some reconfiguration on the GKE/Istio side.
One of the solutions is to have a CNAME record in Cloud DNS and an SSL certificate with Subject Alternative Names.
With the above solution you will be able to send requests to your GKE/Istio cluster with both domain names, assuming a correct Istio configuration.
What is CNAME?
A CNAME is a Canonical Name record, or alias record: a type of resource record in the Domain Name System (DNS) that specifies that one domain name is an alias of another, canonical domain name.
Example of a CNAME record:
DNS name      Type    TTL   Data
old.domain.   A       60    1.2.3.4
new.domain.   CNAME   60    old.domain.
Alternative Names:
A SAN or subject alternative name is a structured way to indicate all of the domain names and IP addresses that are secured by the certificate.
Entrustdatacard.com: What is a SAN and how is it used
You can create an SSL certificate that supports both:
old.domain
new.domain
There are plenty of options to do that, for example Let's Encrypt or Cert-manager.
Example
I've created an example to show you how to do it:
Configure DNS zone in Cloud DNS
Create a basic app with a service
Create a certificate for example app
Create Istio resources to allow connections to example app
Test
Configure DNS zone in Cloud DNS
You will need to have 2 records:
An A record with the IP of your Ingress Gateway and name: old.domain
A CNAME record pointing to old.domain with name: new.domain
Please take a look at the official documentation: Cloud.google.com: DNS: Records
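As a rough sketch, the two records could also be created from the command line with gcloud. This assumes a managed zone named my-zone, and uses old.domain, new.domain and 1.2.3.4 as placeholders for your real domain names and ingress gateway IP:
$ gcloud dns record-sets transaction start --zone=my-zone
$ gcloud dns record-sets transaction add 1.2.3.4 --name=old.domain. --ttl=60 --type=A --zone=my-zone
$ gcloud dns record-sets transaction add old.domain. --name=new.domain. --ttl=60 --type=CNAME --zone=my-zone
$ gcloud dns record-sets transaction execute --zone=my-zone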
Create a basic app with a service
Below is an example app with a service that will respond with a basic hello:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-dp
spec:
  selector:
    matchLabels:
      app: hello-dp
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-dp
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-sv
spec:
  selector:
    app: hello-dp
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
  type: ClusterIP
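Assuming the manifest above is saved as hello.yaml, it can be applied and verified with:
$ kubectl apply -f hello.yaml
$ kubectl get pods -l app=hello-dp
$ kubectl get svc hello-sv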
Create a certificate for example app
As said previously, a certificate with Alternative Names can be created with Let's Encrypt. I created it with:
GCE VM with Ubuntu 16.04
Port 80 open
Domain name old.domain pointing to the public IP address of the VM
Guide: Linode.com: Docs: Install let's encrypt to create a SSL certificate
Command to create certificate:
$ ./letsencrypt-auto certonly --standalone -d old.domain -d new.domain
The certificate is created in /etc/letsencrypt/archive/ and is used to create the TLS secret in GKE with the command:
$ kubectl create secret tls ssl-certificate --cert cert1.pem --key privkey1.pem
Please keep in mind that this certificate was created only for testing purposes, and I strongly advise using a dedicated solution like Cert-manager.
PS: If you used this method, please revert the changes in Cloud DNS so that old.domain points back to the Istio ingress gateway.
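For reference, with Cert-manager installed, a certificate with both names could be requested declaratively instead. This is only a sketch under assumptions: it assumes a ClusterIssuer named letsencrypt-prod already exists, and it places the secret in istio-system because that is where the Istio ingress gateway reads credentialName secrets from:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: hello-certificate
  namespace: istio-system      # namespace of the ingress gateway (assumption)
spec:
  secretName: ssl-certificate  # must match credentialName in the Gateway below
  issuerRef:
    name: letsencrypt-prod     # hypothetical ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
  - old.domain
  - new.domain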
Create Istio resources to allow connections to example app
Below are example Istio resources allowing connections to the example app over HTTPS:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ssl-certificate
    hosts:
    - "old.domain"
    - "new.domain"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
  - "old.domain"
  - "new.domain"
  gateways:
  - hello-gw
  http:
  - route:
    - destination:
        host: hello-sv
        port:
          number: 50001
Please take a specific look at:
tls:
  mode: SIMPLE
  credentialName: ssl-certificate
This part will ensure that connections to the cluster use HTTPS.
Additionally:
hosts:
- "old.domain"
- "new.domain"
The above definition in both resources will allow connections only for the specified domains.
Test
When all of the above resources are applied, you should be able to enter the following in your browser:
https://old.domain
https://new.domain
and be greeted with the message below and a valid SSL certificate:
Hello, world!
Version: 2.0.0
Hostname: hello-dp-5dd8b85b56-bk7zr
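If DNS has not propagated yet, the same check can be done from the command line by forcing name resolution to the ingress IP (1.2.3.4 stands in for your istio-ingressgateway IP):
$ curl --resolve old.domain:443:1.2.3.4 https://old.domain
$ curl --resolve new.domain:443:1.2.3.4 https://new.domain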

Related

Getting https on aws assigned loadbalancer dns?

Is it possible to get HTTPS to work on the automatically assigned DNS name you get from the AWS load balancer when you deploy a service like so:
kind: Service
apiVersion: v1
metadata:
  name: gateway-svc
spec:
  selector:
    app: gateway
  type: LoadBalancer
  ports:
  - name: gateway-svc
    port: 80
    targetPort: 4000
I know you can use annotations and something like this:
metadata:
  name: gateway-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:<NUMBER>:certificate/c556ca29-ddbe-4983-b01b-ff7e9f2708ba
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
But the auto-assigned DNS name, which is something like http://<NUMBERS>-<NUMBERS>.<REGION>.elb.amazonaws.com/, is too long to register in ACM.
How can I get this working?
But the auto-assigned DNS name, which is something like http://<NUMBERS>-<NUMBERS>.<REGION>.elb.amazonaws.com/, is too long to register in ACM.
Why do you want to get an ACM certificate for that domain?
ACM provides wildcard certificates, and you can use one for your own domain.
For adding the entry into DNS (like Route 53), you should check out ExternalDNS:
https://github.com/kubernetes-sigs/external-dns#running-locally
If your domain is xyz.com, you can get a certificate for *.xyz.com and use it everywhere, as it's a wildcard, and attach it to any LB service.
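As a rough sketch of that approach (assuming a wildcard ACM certificate for *.xyz.com; the ARN below is a placeholder, not a value from your account):
apiVersion: v1
kind: Service
metadata:
  name: gateway-svc
  annotations:
    # placeholder ARN of the wildcard certificate issued by ACM for *.xyz.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<REGION>:<ACCOUNT>:certificate/<ID>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  selector:
    app: gateway
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 4000
You would then point a record such as api.xyz.com at the load balancer's hostname (manually or via ExternalDNS) instead of using the auto-assigned ELB name directly.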

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX Ingress YAML looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: myapp-tls
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand it has something to do with the EKS default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain, as it just resolves to the ingress address instead of my desired address and thus my app doesn't show up.
The domain is registered with Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves to the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set it up? Should I give up on NGINX ingress on EKS and move on to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name: myapp-ingress
Namespace: default
Address: ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
myapp-tls terminates app.mydomain.com
Rules:
Host Path Backends
---- ---- --------
app.mydomain.com
/ myapp-service:80 (172.31.2.238:8000)
Annotations: cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
kubernetes.io/ingress.class: nginx
Events: <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use it in layer 7 mode.
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
  - host: service.d.tld
    http:
      paths:
      - path: /?(.*) # <---
        backend:
          serviceName: my-service
          servicePort: http
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the DNS entry?
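For reference, the record in Google Domains would typically be a CNAME pointing at the ELB hostname rather than an A record to an IP (the hostname below is a placeholder):
DNS name           Type    Data
app.mydomain.com.  CNAME   <elb-name>.elb.eu-west-1.amazonaws.com.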

AWS EKS WITH FARGATE PROFILE USING KONG INGRESS- Unable to expose port 80 to public

I deployed kong ingress controller on aws eks cluster with fargate option.
I am unable to access our application over the internet on the HTTP port.
I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I did follow the Kong deployment as per steps given at -
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without error, yet its EXTERNAL-IP is still showing as pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB load balancer is also created automatically in the AWS console when we create the kong-proxy service, and we are using its DNS name to try to connect from the internet.
Kindly help me understand what could be the problem.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think it's supported now as per https://github.com/aws/containers-roadmap/issues/617
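To confirm the symptom, check whether the load balancer address was ever assigned to the service; the events in the describe output usually say why provisioning fails:
$ kubectl get svc kong-proxy -n kong
$ kubectl describe svc kong-proxy -n kong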

Istio installation on AWS using ELB TCP getting 504 timeout

I am fairly new to Istio. So far I have a k8s cluster (using kops) on AWS, behind an ELB.
All traffic is routed via TCP.
The ingress gateway service is configured as NodePort with the following config:
istio-system istio-ingressgateway NodePort 100.65.241.150 <none> 15020:31038/TCP,80:30205/TCP,31400:30204/TCP,15029:31714/TCP,15030:30016/TCP,15031:32508/TCP,15032:30110/TCP,15443:32730/TCP
I have used the 'demo' helm option to deploy Istio 1.4.0.
I have created a Gateway, VirtualService and DestinationRule with the following config.
The Gateway is in the istio-system namespace; the VirtualService and DestinationRule are in the default namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0
---
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  host: webapp
  subsets:
  - labels:
      version: original
    name: original
  - labels:
      version: v2
    name: v2
The service pods listen on port 80; I have tested via port forwarding and they are functioning as expected.
However, when I curl https://hostname externally I get a
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
I have enabled debug logging in Envoy but don't see anything meaningful in the logs relating to the timeout.
Any suggestions on where I might be going wrong?
Do I need to add any service annotations relating to the ELB in the Istio ingress gateway?
Any other suggestions?
I found a few things which need to be fixed.
1. Connect with the load balancer
As I mentioned in the comments, you need to fix your ingress gateway to automatically get an EXTERNAL-IP address as described in the Istio documentation. For now your ingress is a NodePort, so as far as I'm concerned it won't work; you can configure Istio to work with a NodePort, but I assume you want the load balancer.
The first step would be to change the istio-ingressgateway svc type from NodePort to LoadBalancer and check if you get the EXTERNAL-IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <pending> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
It should look like this:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h
And then everything goes through the EXTERNAL-IP address, which is 130.211.10.121.
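A minimal way to switch the service type and watch for the address to be assigned (assuming the default istio-ingressgateway name and namespace):
$ kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec": {"type": "LoadBalancer"}}'
$ kubectl get svc istio-ingressgateway -n istio-system -w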
2. Fix your YAMLs
Note that for TCP traffic like this, we must match on the incoming port, in this case port 31400.
Check this example from the Istio documentation.
Especially the part with the gateway, virtual service and destination rule.
You should add this to your virtual service.
tcp:
- match:
  - port: 31400
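Put together, the tcp section of the virtual service could look roughly like this; it is only a sketch, assuming the webapp subsets from your destination rule and that the backing service listens on port 80 as stated in the question:
tcp:
- match:
  - port: 31400          # incoming port on the gateway
  route:
  - destination:
      host: webapp
      subset: original
      port:
        number: 80       # port the service listens on (assumption from the question)
    weight: 100
  - destination:
      host: webapp
      subset: v2
      port:
        number: 80
    weight: 0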
3. Remember the namespaces
In your example, because everything is in default, it should work; but if the gateway and virtual service end up in different namespaces, remember that you need to tell the virtual service which namespace the gateway is in.
Example here
Especially this part in the virtual service:
gateways:
- some-config-namespace/my-gateway
I hope this helps with your issues. Let me know if you have any more questions.

Reference ConfigMap / Secret in Kubernetes object metadata

I have a Kubernetes cluster provisioned on AWS with kops, and I use the Route 53 mapper to configure the ELB based on Service annotations. I use namespaces for the different environments (dev, test, prod), with the configuration defined in ConfigMap and Secret objects.
The environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
  - name: https
    port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor so I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not using any templating before sending the file to the API via kubectl.
There is not.
FWIW, it seems nuts that that mapper was designed to pull cert data from annotations on a Service. Service objects are not otherwise secret.
The mapper should be able to consume cert data from a Secret that has well defined fields to indicate what domain should be wired with what cert data in front of what service.