Reference ConfigMap / Secret in Kubernetes object metadata - amazon-web-services

I have a Kubernetes cluster provisioned on AWS with kops. I use route53-mapper to configure the ELB based on Service annotations, and I use namespaces for the different environments (dev, test, prod), with configuration defined in ConfigMap and Secret objects.
The environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
    - name: https
      port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor, so I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not one that uses templating before sending the file to the API via kubectl.

There is not.
FWIW, it seems nuts that the mapper was designed to pull cert data from annotations on a Service; Service objects are not otherwise secret.
The mapper should be able to consume cert data from a Secret that has well-defined fields indicating which domain should be wired with which cert data in front of which service.
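Such a Secret does not exist for this mapper today. Purely as an illustration of that idea (every field name below is hypothetical, not implemented by any real controller), it might look like this:
# Hypothetical sketch only: a Secret a mapper *could* consume instead of
# Service annotations. None of these field names exist in any real controller.
apiVersion: v1
kind: Secret
metadata:
  name: http-proxy-tls-config
  namespace: dev
type: Opaque
stringData:
  domainName: dev.myapp.example.io
  certificateArn: arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
  targetService: http-proxy-service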

Related

AWS Load Balancer Controller on EKS - Sticky Sessions Not Working

I have deployed the AWS Load Balancer Controller on AWS EKS and created a k8s Ingress resource.
I am deploying a Java web application with a k8s Deployment, and I want to make sure sticky sessions hold so that my application works.
I have read that sticky sessions will work if I set the annotation below:
alb.ingress.kubernetes.io/target-type: ip
But I am seeing the Ingress route requests to a different replica each time, so logins fail because session cookies do not persist.
What am I missing here?
alb.ingress.kubernetes.io/target-type: ip is required,
but the annotation that enables stickiness is:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
You can also set the cookie duration:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
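For context, a minimal Ingress carrying both annotations might look like the sketch below (the host and the Service name are placeholders, not taken from the question):
# Sketch only: an ALB Ingress with target-type ip and stickiness enabled.
# The host and backend Service name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: java-web-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: java-web-service
                port:
                  number: 80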
If you want to manage sticky sessions at the K8s level instead, you can use sessionAffinity: ClientIP:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  selector:
    app: app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with a ClusterIP Service. I have the nginx-ingress controller in my k8s cluster; to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created the ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This exposes the "rdp-service1" service. I have 10 more such services that need to be exposed on the same port number, but if I add another service to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it overwrites the previous service entry. Since I have also deployed external-dns in k8s, all the records created by the Ingress using host: ... start pointing to the deployment attached to the newly added service in the ConfigMap.
My final requirement is that as soon as I append the rule for a newly created deployment (RDP application) in the Ingress, it starts allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this type of use case and can also be easily integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
The NGINX Ingress controller's main responsibility is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for an Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
where there would be no Host-based routing; it would work more like port-forwarding.
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by cloud provider specifications).
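As a sketch of that scenario (service names and namespaces below are placeholders, assuming the stock tcp-services ConfigMap mechanism), the ConfigMap could map each RDP Deployment to its own external port:
# Sketch only: one external port per RDP service via the nginx-ingress
# tcp-services ConfigMap. Service names and namespaces are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3389": "demo/rdp-service1:3389"
  "3390": "demo/rdp-service2:3389"
  "3391": "demo/rdp-service3:3389"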
As for possible solutions, which might not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check the following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I don't really know why you are using that ConfigMap.
As far as I know, the nginx-ingress controller routes traffic coming in on the same port based on the host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: your-hostname
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              serviceName: {{ .Chart.Name }}-service
              servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
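For example, each RDP Deployment could get its own Service of type LoadBalancer for plain TCP. This is only a sketch; the names and namespace are placeholders, not taken from the question:
# Sketch: exposing a single RDP deployment over TCP with its own
# LoadBalancer Service instead of an Ingress. Names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1
  ports:
    - name: rdp
      protocol: TCP
      port: 3389
      targetPort: 3389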

Redirect URLs using Google Cloud

I have a domain (example.com) already configured in Cloud DNS. With this domain I can access the microservices that are in a GKE cluster. I use the istio-ingressgateway IP in Cloud DNS to make the association between the domain and the cluster.
Now I have another domain (newexample.com) with a custom certificate for HTTPS connections. Is there a way to redirect all requests for newexample.com to example.com? I do not want to change anything in the GKE/Istio configuration if possible.
Each method will require some reconfiguration on the GKE/Istio side.
One solution is to have a CNAME record in Cloud DNS and an SSL certificate with Subject Alternative Names.
With the above solution you will be able to send requests to your GKE/Istio cluster with both domain names, assuming a correct Istio configuration.
What is CNAME?
CNAME is a Canonical Name Record or Alias Record.
It is a type of resource record in the Domain Name System (DNS) that specifies that one domain name is an alias of another, canonical, domain name.
Example of a CNAME record:
DNS name Type TTL Data
old.domain. A 60 1.2.3.4
new.domain. CNAME 60 old.domain.
Alternative Names:
A SAN or subject alternative name is a structured way to indicate all of the domain names and IP addresses that are secured by the certificate.
Entrustdatacard.com: What is a san and how is it used
You can create an SSL certificate to support both:
old.domain
new.domain
There are plenty of options to do that, for example Let's Encrypt or cert-manager.
Example
I've created an example to show you how to do it:
Configure DNS zone in Cloud DNS
Create a basic app with a service
Create a certificate for example app
Create Istio resources to allow connections to example app
Test
Configure DNS zone in Cloud DNS
You will need to have 2 records:
An A record with the IP of your Ingress Gateway and the name old.domain
A CNAME record pointing to old.domain with the name new.domain
Please take a look at the official documentation: Cloud.google.com: DNS: Records
Create a basic app with a service
Below is an example app with a Service that will respond with a basic hello:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-dp
spec:
  selector:
    matchLabels:
      app: hello-dp
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-dp
    spec:
      containers:
        - name: hello
          image: "gcr.io/google-samples/hello-app:2.0"
          env:
            - name: "PORT"
              value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-sv
spec:
  selector:
    app: hello-dp
  ports:
    - name: hello-port
      protocol: TCP
      port: 50001
      targetPort: 50001
  type: ClusterIP
Create a certificate for example app
As said previously, a certificate with Alternative Names can be created with Let's Encrypt. I created it with:
GCE VM with Ubuntu 16.04
Open port 80
Domain name old.domain pointing to the public IP address of the VM
Guide: Linode.com: Docs: Install let's encrypt to create an SSL certificate
Command to create the certificate:
$ ./letsencrypt-auto certonly --standalone -d old.domain -d new.domain
The certificate created in /etc/letsencrypt/archive/ was used to create a TLS secret for GKE with the command:
$ kubectl create secret tls ssl-certificate --cert cert1.pem --key privkey1.pem
Please keep in mind that this certificate was created only for testing purposes, and I strongly advise using a dedicated solution like cert-manager.
PS: If you used this method, please revert the changes in Cloud DNS to point back to the Istio gateway.
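As a hedged illustration of the cert-manager route (assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists; both names are assumptions, not part of this answer), a single Certificate covering both domains could produce the same ssl-certificate secret:
# Sketch: a cert-manager Certificate with both domains as dnsNames,
# writing to the ssl-certificate secret referenced by the Gateway below.
# Assumes cert-manager is installed and a ClusterIssuer "letsencrypt-prod" exists.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: hello-cert
  namespace: istio-system
spec:
  secretName: ssl-certificate
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - old.domain
    - new.domain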
Create Istio resources to allow connections to example app
Below are example Istio resources allowing connections to the example app with HTTPS support:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gw
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: ssl-certificate
      hosts:
        - "old.domain"
        - "new.domain"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
    - "old.domain"
    - "new.domain"
  gateways:
    - hello-gw
  http:
    - route:
        - destination:
            host: hello-sv
            port:
              number: 50001
Please take a specific look at:
tls:
  mode: SIMPLE
  credentialName: ssl-certificate
This part ensures that connections to the cluster use HTTPS.
Additionally:
hosts:
  - "old.domain"
  - "new.domain"
The above definition in both resources will allow connections only for the specified domains.
Test
Once all of the above resources are applied, you should be able to enter the following in your browser:
https://old.domain
https://new.domain
and be greeted with the message below and a valid SSL certificate:
Hello, world!
Version: 2.0.0
Hostname: hello-dp-5dd8b85b56-bk7zr

How to redirect http to https in Kubernetes service manifest

I created a Service of type LoadBalancer and also configured an SSL certificate for it. Everything works fine, but it does not redirect my HTTP calls to HTTPS unless I manually put https in front of my domain.
Here is my svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    dns.alpha.kubernetes.io/external: test.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: nginx
spec:
  type: LoadBalancer
  loadBalancerIP:
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 80
  selector:
    app: nginx
I believe the k8s Service object does not have redirection functionality; it is designed to provide a stable IP (ClusterIP) in front of Pods, which have ephemeral IPs. It gives Pods service-discovery functionality within the cluster.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.
k8s service
Redirection should happen at the Ingress level (L7) or at the cloud provider's load balancer (L4).
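For example, with the nginx ingress controller the redirect can be expressed on an Ingress object. The following is only a sketch, assuming the nginx ingress controller is installed and TLS is terminated in front of it; the hostname and Service name are reused from the question's manifest:
# Sketch: HTTP-to-HTTPS redirect handled at the Ingress (L7) layer with
# the nginx ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80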

My kubernetes AWS NLB integration is not working

I am trying to deploy a service in Kubernetes that is available through a Network Load Balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a Deployment definition that works fine as is. My Service definition without the NLB annotation looks something like this and works fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
However, when I switch to the NLB, even though the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  externalTrafficPolicy: Local
It seems there was a rule missing in the k8s nodes security group, since the NLB forwards the client IP.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the Service plus some extra configuration on the ingress controller.
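For reference, the proxy protocol annotation mentioned above is set on the Service roughly like this (a sketch only; the ingress controller must separately be configured to accept proxy protocol, which is not shown):
# Sketch: enabling proxy protocol on the AWS load balancer so the client
# IP is preserved. The ingress controller must also be configured to
# parse proxy protocol; that part is not shown here.
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP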
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth bothering.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so the LB sends traffic to a subset of nodes that have service endpoints, instead of to all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579
^describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
^describes a workaround to force it to work using a kops hook
but honestly, you should just stick with externalTrafficPolicy: Cluster, as it's always more stable.
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422