Kubernetes - ingress route issue - route across namespaces - amazon-web-services

I have a Kubernetes cluster on AWS with one master and two worker nodes. I have two environments (qc and prod) in the cluster, so I created two namespaces.
I have the same service running in the qc and prod namespaces.
I have created an Ingress for each namespace:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: prod
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app
        backend:
          serviceName: client-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: qc
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app-qc
        backend:
          serviceName: client-svc
          servicePort: 80
I have client-svc in both the qc and prod namespaces, with node port 80 open.
Then I created the ELB Service and DaemonSet below:
kind: Service
apiVersion: v1
metadata:
  name: ingress-svc
  namespace: deafult
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ca-central-1:492276880714:certificate/xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ingress-nginx
  namespace: deafult
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
When I try curl -iv https://gayan.qc-k8s.example.com/app/, I get this error:
2017/06/27 15:43:31 [error] 158#158: *981 connect() failed (111: Connection refused) while connecting to upstream, client: 209.128.50.138, server: gayan.qc-k8s.example.com, request: "GET /app/ HTTP/1.1", upstream: "http://100.82.2.47:80/app/", host: "gayan.qc-k8s.example.com"
209.128.50.138 - [209.128.50.138, 209.128.50.138] - - [27/Jun/2017:15:43:31 +0000] "GET /app/ HTTP/1.1" 500 193 "-" "curl/7.51.0" 198 0.014 100.82.2.47:80, 100.96.2.48:80 0, 193 0.001, 0.013 502, 500
If I curl https://gayan.qc-k8s.example.com/app-qc, I get the same issue.
Has anyone experienced this error before? Any clue how to resolve it?
Thank you

I solved this with the help of https://github.com/kubernetes/kubernetes/issues/17088
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
We make one of these for each of our namespaces, for each track.
Then we have a single namespace with an Ingress Controller, which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which are exposed on a NodePort and run as a DaemonSet so that exactly one runs on every node in our cluster.
Traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
We keep the isolation between namespaces while using Ingresses as they were intended. It is neither correct nor sensible to use one Ingress to hit multiple namespaces; that is simply not how they are designed. The solution is one Ingress per namespace, with a cluster-scoped Ingress Controller that actually does the routing. A sketch of the same manifest for a second namespace is shown below.
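As an illustration only (not taken from the real document above): the same manifest repeated for a hypothetical second track named dev-2. The hostnames and the assumption that the faceitssl secret also exists in dev-2 are mine.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-2
spec:
  rules:
  - host: api-gateway-dev-2.faceit.com    # illustrative hostname pattern
    http:
      paths:
      - backend:
          serviceName: api-gateway        # resolves to the api-gateway Service in dev-2
          servicePort: 80
        path: /
  tls:
  - hosts:
    - api-gateway-dev-2.faceit.com
    secretName: faceitssl                 # assumes the TLS secret is also present in dev-2
The cluster-scoped controller watches Ingresses in every namespace, so each track only has to declare its own hosts and backends.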

Related

AWS Ingress Controller seems to be ignoring host name rules

I am trying to deploy a frontend application to Amazon EKS. The concept is that there will be two deployments as well as two services (frontend-service and stg-frontend-service), one for production and one for staging.
On top of that, there will be an ALB ingress which will route traffic based on the hostname: if the hostname is www.project.io, traffic will be routed to frontend-service, and if the hostname is stg.project.io, traffic will be routed to stg-frontend-service.
Here are my deployment and ingress configs
stg-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stg-frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stg-frontend
  template:
    metadata:
      labels:
        app: stg-frontend
    spec:
      containers:
      - name: stg-frontend
        image: STAGING_IMAGE
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 3000
      imagePullSecrets:
      - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: stg-frontend-service
  namespace: project
spec:
  selector:
    app: stg-frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
stg-prod-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: PRODUCTION_IMAGE
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 3000
      imagePullSecrets:
      - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: project
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
  namespace: project
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: www.project.io
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
  - host: stg.project.io
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stg-frontend-service
            port:
              number: 80
Later, I used Route 53 to route traffic from both domains to the ALB.
+----------------+------+---------+-----------------------------------------------------+
| Record Name | Type | Routing | Value/Route traffic to |
+----------------+------+---------+-----------------------------------------------------+
| www.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
| stg.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
+----------------+------+---------+-----------------------------------------------------+
The problem is that the ALB ingress always routes traffic to the first rule in spec.rules. In the config above, the first rule is for host www.project.io, which refers to frontend-service. Whenever I try to access www.project.io or stg.project.io, I get a response from frontend-service.
When I switched the rules and put the staging rule first, both domains showed the staging service.
I even created a dummy record like junk.project.io and pointed it at the load balancer; it still worked and returned the same response, even though junk.project.io is not included in my ingress config at all.
It seems that the Ingress config is completely ignoring the host name and always returning a response from the first rule.
Your host and http values are defined as separate items in the list; try removing the - (hyphen) in front of the http node:
- host: www.project.io
  http: # I removed the hyphen here
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: frontend-service
          port:
            number: 80
- host: stg.project.io
  http: # I removed the hyphen here
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: stg-frontend-service
          port:
            number: 80
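Once the corrected Ingress is applied, the host-based routing can be checked against the ALB directly by setting the Host header. This is only a sketch: the ALB DNS name below is a placeholder for the (redacted) value in the Route 53 table above.
kubectl apply -f ingress.yaml
kubectl describe ingress project-ingress -n project   # confirm both hosts appear as separate rules

# Substitute the real ALB DNS name from the Route 53 records.
curl -i -H "Host: www.project.io" http://dualstack.k8s-XXXXXXXX.us-west-1.elb.amazonaws.com/
curl -i -H "Host: stg.project.io" http://dualstack.k8s-XXXXXXXX.us-west-1.elb.amazonaws.com/
Each request should now reach the service matching its Host header rather than the first rule.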

nginx.ingress.kubernetes.io/rewrite-target doesn't work, Failed to load resource: the server responded with a status of 500

I have backend and frontend applications. I have tried to create one ingress for the frontend where both paths will be matched (host.com/api/v1/reference/list/test1 and host.com/api/v1/reference/test2). The second one works fine, but the first gives me an error: Failed to load resource: the server responded with a status of 500 (). Here is my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3
spec:
  tls:
  - hosts:
    - host.com
    secretName: tls-secret
  rules:
  - host: host.com
    http:
      paths:
      - backend:
          serviceName: service-backend
          servicePort: 80
        path: /api(/|$)(.*)
And the service:
apiVersion: v1
kind: Service
metadata:
  name: service-backend
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Does anyone know why my URLs are not getting rewritten and the requests are not delivered to the backend service for host.com/api/v1/reference/test2?
Thanks in advance!
Resolved: the bug was inside the application, not in the ingress.
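For anyone debugging a similar setup, the rewrite itself can be verified independently of the application by sending a test request and inspecting what the controller does with it. Note that the path regex /api(/|$)(.*) defines only two capture groups, so the $3 in the rewrite-target annotation expands to nothing. The namespace, labels, and pod name below are assumptions; adjust them to your nginx ingress installation.
# Request that should be rewritten from /api/v1/... to /v1/... before reaching the backend.
curl -ik https://host.com/api/v1/reference/list/test1

# Check the controller logs for the request and any upstream errors,
# and confirm the generated rewrite directive in the rendered nginx config.
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=20
kubectl exec -n ingress-nginx <controller-pod> -- grep -n "rewrite" /etc/nginx/nginx.conf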

ingress always showing default backend - 404

I am trying to install the Cyclos mobile app on GCP using the demo given on YouTube at this URL: https://www.youtube.com/watch?v=LmfdhOXK_ik
Everything was set up exactly as in the video, but when I try to access the setup in a browser it always shows default backend - 404. I have tried everything suggested in several previous Stack Overflow questions, but I still get the same error.
Any idea how to fix this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencypt-staging
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/cluster-issuer":"letsencypt-staging","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/proxy-connect-timeout":"3600"},"name":"cyclos-ingress-nginx-https","namespace":"cyclos-name-space"},"spec":{"backend":{"serviceName":"default-http-backend","servicePort":80},"rules":[{"host":"ip-address.xip.io","http":{"paths":[{"backend":{"serviceName":"cyclos-app-stateful","servicePort":80},"path":"/*"}]}}],"tls":[{"hosts":["ip-address.xip.io"],"secretName":"ip-address.xip.io-tls-secret"}]}}
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-09-29T07:00:01Z"
  generation: 11
  name: cyclos-ingress-nginx-https
  namespace: cyclos-name-space
  resourceVersion: "643221534"
  selfLink: /apis/extensions/v1beta1/namespaces/cyclos-name-space/ingresses/cyclos-ingress-nginx-https
  uid: uid
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - backend:
          serviceName: cyclos-app-stateful
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - ip-address.xip.io
    secretName: ip-address.xip.io-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: IP

istio: ingress with grpc and http

I have a service listening on two ports; one is HTTP, the other is gRPC.
I would like to set up an ingress that can route to both of these ports, with the same host.
The load balancer would redirect to the HTTP port if http/1.1 is used, and to the gRPC port if h2 is used.
Is there a way to do that with Istio?
I made a hello world example demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-http
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-grpc
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 50051
---
I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have istio installed on a kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-istio-gate
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 50051
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-istio-vsvc
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - hello-world-istio-gate
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: hello-world
        port:
          number: 80
  tcp:
  - match:
    - port: 50051
    route:
    - destination:
        host: hello-world
        port:
          number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line in "hello-world-istio-gate" gives a clue:
istio: ingressgateway
This refers to a pod, usually installed by default in the 'istio-system' namespace, called 'istio-ingressgateway'; this pod is exposed by a service also called 'istio-ingressgateway'. You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and GRPC. The result is the following (edited for length) yaml code:
dampersand@kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  <omitted for length>
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  <omitted for length>
spec:
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  <omitted for length>
  - name: grpc
    nodePort: 30000
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
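As a rough illustration of testing that NodePort setup (the node IP is a placeholder, grpcurl is a separate tool rather than part of kubectl, and the gRPC service/method names are assumptions about the hello-world image):
# HTTP through the gateway's http2 NodePort.
curl -i -H "Host: hello-world" http://<node-ip>:31380/

# gRPC through the grpc NodePort.
grpcurl -plaintext -proto hello.proto <node-ip>:30000 hello.Greeter/SayHello
With a LoadBalancer-type ingressgateway you would hit ports 80 and 50051 on the load balancer's address instead.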
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then use Istio route rules, behind the Ingress, to split the traffic. A rule with a match for the header Upgrade: h2 could send traffic to the gRPC version, and a default rule would send the rest of the traffic to the HTTP one.
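For what it's worth, here is a rough sketch of that idea using the newer VirtualService API rather than the original route-rule syntax. The header match follows the suggestion above; in practice you would route to subsets backed by the separate pods, which also requires a DestinationRule defining those subsets. Everything here is an assumption, not a tested config:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-split
  namespace: dev
spec:
  hosts:
  - hello-world
  http:
  - match:
    - headers:
        upgrade:
          exact: h2          # h2 upgrades go to the gRPC port
    route:
    - destination:
        host: hello-world
        port:
          number: 50051
  - route:                   # default rule: everything else goes to the HTTP port
    - destination:
        host: hello-world
        port:
          number: 80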

Kubernetes 1.4 SSL Termination on AWS

I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did #1 (SSL termination) by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but it seems like a dead end for the other requirements.
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect, as described in the link above
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
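For requirement 4 above (one service serving the same content on http and https), the forced redirect can be switched off per Ingress with an annotation. A minimal sketch, assuming an illustrative service named public-site; older controllers such as the 0.8.3 image used later in this thread take the ingress.kubernetes.io/ssl-redirect annotation, while newer releases use the nginx.ingress.kubernetes.io/ prefix:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: public-site                 # illustrative name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: www.example.io            # illustrative host
    http:
      paths:
      - backend:
          serviceName: public-site
          servicePort: http-port
        path: /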
There may be a better way to do this. I wrote this answer because I asked the question, and this is the best I could come up with from Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
This creates an endpoint on the node's IP address, which can change depending on where the Ingress container is running.
Note the ConfigMap at the bottom; it is configured per environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: all-ingress
spec:
  tls:
  - hosts:
    - admin-stage.example.io
    secretName: tls-secret
  rules:
  - host: admin-stage.example.io
    http:
      paths:
      - backend:
          serviceName: admin
          servicePort: http-port
        path: /
---
apiVersion: v1
data:
  enable-sticky-sessions: "true"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
App Service and Deployment
The service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend".
apiVersion: v1
kind: Service
metadata:
  name: admin
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  selector:
    app: admin
  sessionAffinity: ClientIP
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: admin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin
      name: admin
    spec:
      containers:
      - image: example/admin:latest
        name: admin
        ports:
        - containerPort: 80
          name: http-port
        resources:
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /etc/env-volume
          name: config
          readOnly: true
      imagePullSecrets:
      - name: cloud.docker.com-pull
      volumes:
      - name: config
        secret:
          defaultMode: 420
          items:
          - key: admin.sh
            mode: 256
            path: env.sh
          - key: settings.json
            mode: 256
            path: settings.json
          secretName: env-secret
Ingress Nginx Docker Image
Note the default-ssl-certificate argument at the bottom.
Logging is great with the -v argument below.
Note that the Service will create an ELB on AWS, which can be used to configure DNS.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: http-port
  - name: https-port
    port: 443
    protocol: TCP
    targetPort: https-port
  selector:
    app: nginx-ingress-service
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http-port
          containerPort: 80
          hostPort: 80
        - name: https-port
          containerPort: 443
          hostPort: 443
        # we expose 18080 to access nginx stats in url /nginx-status
        # this is optional
        - containerPort: 18080
          hostPort: 18080
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --default-ssl-certificate=default/tls-secret
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
        - --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
This config uses three secrets:
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. The container has a start script to set up the environment.
cloud.docker.com-pull - the image pull secret referenced by imagePullSecrets
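For reference, secrets like these can be created from local files with kubectl. A minimal sketch, assuming the file names listed above live in the current directory and that the registry credentials below are placeholders:
# TLS material used by --default-ssl-certificate (created as a generic secret
# because it carries dhparam.pem in addition to the cert/key pair).
kubectl create secret generic tls-secret \
  --from-file=tls.crt --from-file=tls.key --from-file=dhparam.pem

# Application environment files mounted into the admin pod at /etc/env-volume.
kubectl create secret generic env-secret \
  --from-file=admin.sh --from-file=settings.json

# Image pull credential referenced by imagePullSecrets (all values are placeholders).
kubectl create secret docker-registry cloud.docker.com-pull \
  --docker-server=cloud.docker.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>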