How to access a MinIO instance within a Kubernetes cluster?

I've deployed a MinIO server on Kubernetes with cdk8s, using minio1 as the serviceId and exposing port 9000.
My expectation was that I could access it at http://minio1:9000, but the MinIO server is unreachable from both Prometheus and the other instances in my namespace (Loki, Mimir, etc.). Is there a specific configuration I missed to enable access within the network? The server starts without errors, so it sounds like a networking issue.
I'm starting the server this way:
command: ["minio"],
args: [
"server",
"/data",
"--address",
`:9000`,
"--console-address",
`:9001`,
],
I patched the Kubernetes configuration to expose both 9000 and 9001:
const d = ApiObject.of(minioDeployment);
//Create the empty port list
d.addJsonPatch(JsonPatch.add("/spec/template/spec/containers/0/ports", []));
//add port for console
d.addJsonPatch(
JsonPatch.add("/spec/template/spec/containers/0/ports/0", {
name: "console",
containerPort: 9001,
})
);
// add port for bucket
d.addJsonPatch(
JsonPatch.replace("/spec/template/spec/containers/0/ports/1", {
name: "bucket",
containerPort: 9000,
})
);
Could this be related to the multi-port configuration? Or is there a way to explicitly define the hostname as the service ID so that it is accessible within the Kubernetes namespace?
Here are the Deployment and Service definitions generated by cdk8s:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
use-default-egress-policy: "true"
name: minio1
namespace: ns-monitoring
spec:
minReadySeconds: 0
progressDeadlineSeconds: 600
replicas: 1
selector:
matchExpressions: []
matchLabels:
cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
strategy:
rollingUpdate:
maxSurge: 50%
maxUnavailable: 50%
type: RollingUpdate
template:
metadata:
labels:
cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
spec:
automountServiceAccountToken: true
containers:
- args:
- server
- /data/minio/
- --address
- :9000
- --console-address
- :9001
command:
- minio
env:
- name: MINIO_ROOT_USER
value: userminio
- name: MINIO_ROOT_PASSWORD
value: XXXXXXXXXXXXXX
- name: MINIO_BROWSER
value: "on"
- name: MINIO_PROMETHEUS_AUTH_TYPE
value: public
image: minio/minio
imagePullPolicy: Always
name: minio1-docker
ports:
- containerPort: 9001
name: console
- containerPort: 9000
name: bucket
securityContext:
privileged: false
readOnlyRootFilesystem: false
runAsNonRoot: false
volumeMounts:
- mountPath: /data
name: data
dnsConfig:
nameservers: []
options: []
searches: []
dnsPolicy: ClusterFirst
hostAliases: []
initContainers: []
restartPolicy: Always
securityContext:
fsGroupChangePolicy: Always
runAsNonRoot: false
sysctls: []
setHostnameAsFQDN: false
volumes:
- emptyDir: {}
name: data
---
apiVersion: v1
kind: Service
metadata:
labels:
use-default-egress-policy: "true"
name: minio1
namespace: ns-monitoring
spec:
externalIPs: []
ports:
- port: 443
targetPort: 9001
selector:
cdk8s.deployment: stack-minio1-minio1-deployment-c8e6c44f
type: ClusterIP

As mentioned in the documentation:
List of ports to expose from the container. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network.
As #user2311578 suggested, exposing a port at the container level does not automatically make it available on the Service. You must specify it there when you want to access it through the Service (and its virtual IP).
Also check the GitHub link for more information.
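For reference, here is a minimal sketch of what the Service could look like once port 9000 is also exposed, assuming the pod labels from the generated Deployment above (the selector has to match the cdk8s.deployment label exactly):
apiVersion: v1
kind: Service
metadata:
  name: minio1
  namespace: ns-monitoring
spec:
  type: ClusterIP
  selector:
    # must match the pod template label of the Deployment above
    cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
  ports:
    - name: bucket        # MinIO S3 API
      port: 9000
      targetPort: 9000
    - name: console       # MinIO web console
      port: 9001
      targetPort: 9001
With a Service like this, other pods in ns-monitoring should be able to reach the API at http://minio1:9000, or at http://minio1.ns-monitoring.svc.cluster.local:9000 from other namespaces.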

Related

Containers in pod won't talk to each other in Kubernetes

I have three containers in a pod: nginx, redis, and a custom Django app. It seems like none of them talk to each other in Kubernetes. In Docker Compose they do, but I can't use Docker Compose in production.
The django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
and the nginx container starts but never shows any traffic. Trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: djangonetwork
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/djangonetwork: "true"
podSelector:
matchLabels:
io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
DB_HOST: db
DB_NAME: django_db
DB_PASSWORD: password
DB_PORT: "5432"
DB_USER: user
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
io.kompose.service: web
name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: web
name: web
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: web
strategy:
type: Recreate
template:
metadata:
labels:
io.kompose.network/djangonetwork: "true"
io.kompose.service: web
spec:
containers:
- image: nginx:alpine
name: nginxcontainer
ports:
- containerPort: 8000
- image: redis:alpine
name: rediscontainer
ports:
- containerPort: 6379
resources: {}
- env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
key: DB_HOST
name: envs--django
- name: DB_NAME
valueFrom:
configMapKeyRef:
key: DB_NAME
name: envs--django
- name: DB_PASSWORD
valueFrom:
configMapKeyRef:
key: DB_PASSWORD
name: envs--django
- name: DB_PORT
valueFrom:
configMapKeyRef:
key: DB_PORT
name: envs--django
- name: DB_USER
valueFrom:
configMapKeyRef:
key: DB_USER
name: envs--django
image: localhost:5000/integration/web:latest
name: djangocontainer
ports:
- containerPort: 8000
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
name: web
spec:
ports:
- name: "8000"
port: 8000
targetPort: 8000
selector:
io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
spec:
replicas: 1
template:
metadata:
labels:
service: web
component: redis
spec:
containers:
- name: redis
image: redis
ports:
- name: redis
containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis # <-- this name will be a DNS name
spec:
selector: # matches the template: { metadata: { labels: } }
service: web
component: redis
ports:
- name: redis
port: 6379
targetPort: redis # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
...
env:
- name: REDIS_HOST
value: redis # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which part it's talking to. The containers in a single Pod share a network namespace and need to talk to each other as localhost; the containers: [{ name: }] values have no practical effect on networking. A sketch of that single-Pod variant follows below.
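If you do keep everything in one Pod anyway, the broker address in the Django/Celery settings has to point at localhost instead of the Compose service name. A minimal sketch under that assumption (the Pod name web-single-pod and the REDIS_URL variable are hypothetical placeholders; use whatever your settings actually read, and the nginx container is omitted for brevity):
apiVersion: v1
kind: Pod
metadata:
  name: web-single-pod              # hypothetical name, for illustration only
  labels:
    io.kompose.service: web
spec:
  containers:
    - name: rediscontainer
      image: redis:alpine
      ports:
        - containerPort: 6379
    - name: djangocontainer
      image: localhost:5000/integration/web:latest
      env:
        # All containers in a Pod share one network namespace,
        # so Redis is reached via localhost rather than the name "redis".
        - name: REDIS_URL           # hypothetical variable name
          value: redis://localhost:6379/0
      ports:
        - containerPort: 8000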

Quorum nodes run fine with docker-compose, but crash when deployed on kubernetes

Following the steps outlined here, I created a basic Quorum network with 4 nodes and IBFT consensus. I then created a docker image for each of the nodes, copying the contents of each node's directory on to the image. The image was created from the official quorumengineering/quorum image, and when started as a container it executes the geth command. An example Dockerfile follows (different nodes have different rpcports/ports):
FROM quorumengineering/quorum
WORKDIR /opt/node
COPY . /opt/node
ENTRYPOINT []
CMD PRIVATE_CONFIG=ignore nohup geth --datadir data --nodiscover --istanbul.blockperiod 5 --syncmode full --mine --minerthreads 1 --verbosity 5 --networkid 10 --rpc --rpcaddr 0.0.0.0 --rpcport 22001 --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --rpcvhosts="*" --emitcheckpoints --port 30304
I then made a docker-compose file to run the images.
version: '2'
volumes:
qnode0-data:
qnode1-data:
qnode2-data:
qnode3-data:
services:
qnode0:
container_name: qnode0
image: <myDockerHub>/qnode0
ports:
- 22000:22000
- 30303:30303
volumes:
- qnode0-data:/opt/node
qnode1:
container_name: qnode1
image: <myDockerHub>/qnode1
ports:
- 22001:22001
- 30304:30304
volumes:
- qnode1-data:/opt/node
qnode2:
container_name: qnode2
image: <myDockerHub>/qnode2
ports:
- 22002:22002
- 30305:30305
volumes:
- qnode2-data:/opt/node
qnode3:
container_name: qnode3
image: <myDockerHub>/qnode3
ports:
- 22003:22003
- 30306:30306
volumes:
- qnode3-data:/opt/node
When running these images locally with docker-compose, the nodes start and I can even see the created blocks via a blockchain explorer. However, when I try to run this in a kubernetes cluster, either locally with minikube, or on AWS, the nodes do not start but rather crash.
To deploy on kubernetes I made the following three yaml files for each node (12 files in total):
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: qnode0
name: qnode0
spec:
replicas: 1
selector:
matchLabels:
app: qnode0
strategy:
type: Recreate
template:
metadata:
labels:
app: qnode0
spec:
containers:
- image: <myDockerHub>/qnode0
imagePullPolicy: ""
name: qnode0
ports:
- containerPort: 22000
- containerPort: 30303
resources: {}
volumeMounts:
- mountPath: /opt/node
name: qnode0-data
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: qnode0-data
persistentVolumeClaim:
claimName: qnode0-data
status: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
name: qnode0-service
spec:
selector:
app: qnode0
ports:
- name: rpcport
protocol: TCP
port: 22000
targetPort: 22000
- name: netlistenport
protocol: TCP
port: 30303
targetPort: 30303
persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: qnode0-data
name: qnode0-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
When trying to run on a kubernetes cluster, each node runs into this error:
ERROR[] Cannot start mining without etherbase err="etherbase must be explicitly specified"
Fatal: Failed to start mining: etherbase missing: etherbase must be explicitly specified
which does not occur when running locally with docker-compose. After examining the logs, I saw there is a difference between how the nodes start up locally with docker-compose and on kubernetes, which shows in the following lines:
locally I see the following lines in each node's output:
INFO [] Initialising Ethereum protocol name=istanbul versions="[99 64]" network=10 dbversion=7
...
DEBUG[] InProc registered namespace=istanbul
on kubernetes (either in minikube or AWS) I see these lines differently:
INFO [] Initialising Ethereum protocol name=eth versions="[64 63]" network=10 dbversion=7
...
DEBUG[] IPC registered namespace=eth
DEBUG[] IPC registered namespace=ethash
Why is this happening? What is the significance of name=istanbul vs name=eth? My common-sense guess is that the error happens because name=eth is used instead of name=istanbul. But I don't know the significance of this, and more importantly, I don't know what I did to inadvertently affect the kubernetes deployment.
Any ideas how to fix this?
EDIT
I tried to address what David Maze mentioned in his comment, i.e. that the node directory gets overwritten, so I created a new directory in the image with
RUN mkdir /opt/nodedata/
and used that to mount volumes in kubernetes. I also used StatefulSets instead of Deployments in kubernetes. The relevant yaml follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: qnode0
spec:
serviceName: qnode0
replicas: 1
selector:
matchLabels:
app: qnode0
template:
metadata:
labels:
app: qnode0
spec:
containers:
- image: <myDockerHub>/qnode0
imagePullPolicy: ""
name: qnode0
ports:
- protocol: TCP
containerPort: 22000
- protocol: TCP
containerPort: 30303
volumeMounts:
- mountPath: /opt/nodedata
name: qnode0-data
restartPolicy: Always
serviceAccountName: ""
volumes:
- name: qnode0-data
persistentVolumeClaim:
claimName: qnode0-data
Changing the volume mount immediately produced the correct behaviour of
INFO [] Initialising Ethereum protocol name=istanbul
However, I had networking issues, which I solved by using the environment variables that kubernetes sets for each service, which include the IP each service is running at, e.g.:
QNODE0_PORT_30303_TCP_ADDR=172.20.115.164
I also changed my kubernetes services a little, as follows:
apiVersion: v1
kind: Service
metadata:
labels:
app: qnode0
name: qnode0
spec:
ports:
- name: "22000"
port: 22000
targetPort: 22000
- name: "30303"
port: 30303
targetPort: 30303
selector:
app: qnode0
Using the environment variables to properly initialise the quorum files solved the networking problem.
However, when I delete my stateful sets and my services with:
kubectl delete -f <my_statefulset_and_service_yamls>
and then apply them again:
kubectl apply -f <my_statefulset_and_service_yamls>
quorum starts from scratch, i.e. it does not continue block creation from where it stopped but starts from 1 again, as follows:
Inserted new block number=1 hash=1c99d0…fe59bb
So the state of the blockchain is not saved, as was my initial fear. What should I do to address this?

GKE Ingress with NEGs: backend healthcheck doesn't pass

I have created GKE Ingress as follows:
apiVersion: cloud.google.com/v1beta1 #tried cloud.google.com/v1 as well
kind: BackendConfig
metadata:
name: backend-config
namespace: prod
spec:
healthCheck:
checkIntervalSec: 30
port: 8080
type: HTTP #case-sensitive
requestPath: /healthcheck
connectionDraining:
drainingTimeoutSec: 60
---
apiVersion: v1
kind: Service
metadata:
name: web-engine-service
namespace: prod
annotations:
cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created.
cloud.google.com/backend-config: '{"ports": {"web-engine-port":"backend-config"}}' #https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#associating_backendconfig_with_your_ingress
spec:
selector:
app: web-engine-pod
ports:
- name: web-engine-port
protocol: TCP
port: 8080
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
labels:
app: web-engine-deployment
environment: prod
name: web-engine-deployment
namespace: prod
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: web-engine-pod
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
name: web-engine-pod
labels:
app: web-engine-pod
environment: prod
spec:
containers:
- image: my-image:my-tag
imagePullPolicy: Always
name: web-engine-1
resources: {}
ports:
- name: flask-port
containerPort: 5000
protocol: TCP
readinessProbe:
httpGet:
path: /healthcheck
port: 5000
initialDelaySeconds: 30
periodSeconds: 100
restartPolicy: Always
terminationGracePeriodSeconds: 30
---
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
name: my-certificate
namespace: prod
spec:
domains:
- api.mydomain.com #https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#renewal
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: prod-ingress
namespace: prod
annotations:
kubernetes.io/ingress.allow-http: "false"
kubernetes.io/ingress.global-static-ip-name: load-balancer-ip
networking.gke.io/managed-certificates: my-certificate
spec:
rules:
- http:
paths:
- path: /model
backend:
serviceName: web-engine-service
servicePort: 8080
I don't know what I'm doing wrong, because my healthchecks are not OK. And based on the perimeter logging I added to the app, nothing is even trying to hit that pod.
I've tried BackendConfig for both 8080 and 5000.
By the way, it's not 100% clear from the docs whether the load balancer should be configured with the targetPorts of the corresponding Pods or of the Services.
The healthcheck is registered with the HTTP Load Balancer and with Compute Engine.
It seems that something is not right with the backend service IP.
The corresponding backend service configuration:
$ gcloud compute backend-services describe k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
...
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
capacityScaler: 1.0
group: https://www.googleapis.com/compute/v1/projects/wnd/zones/europe-west3-a/networkEndpointGroups/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
maxRatePerEndpoint: 1.0
connectionDraining:
drainingTimeoutSec: 60
creationTimestamp: '2020-08-01T11:14:06.096-07:00'
description: '{"kubernetes.io/service-name":"prod/web-engine-service","kubernetes.io/service-port":"8080","x-features":["NEG"]}'
enableCDN: false
fingerprint: 5Vkqvg9lcRg=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/wnd/global/healthChecks/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
id: '2233674285070159361'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
logConfig:
enable: true
sampleRate: 1.0
name: k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
port: 80
portName: port0
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/wnd/global/backendServices/k8s1-85ef2f9a-prod-web-engine-service-8080-b938a707
sessionAffinity: NONE
timeoutSec: 30
(port 80 looks really suspicious but I thought maybe it's just left there as default and is not in use when NEGs are configured).
Figured it out. By default, even the latest GKE clusters are created without IP alias support, also called VPC-native. I didn't even bother to check that initially because:
NEGs are supported out of the box; what's more, they seem to be the default, with no need for an explicit annotation, on the GKE version I had (1.17.8-gke.17). It doesn't make sense not to enable IP aliases by default then, because it basically means the cluster is in a non-functional state by default.
I didn't check VPC-native support initially because the name of the feature is simply misleading. I had extensive prior experience with AWS, and my faulty assumption was that VPC-native is like EC2-VPC, as opposed to EC2-Classic, which is legacy.

Non-persistent Prometheus metrics on Kubernetes

I'm collecting Prometheus metrics from a uWSGI application hosted on Kubernetes, but the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster, and I have assigned persistent storage to it.
How do I retain the metrics from the pods even after they are deleted?
The Prometheus deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
- "--storage.tsdb.retention=2200h"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
persistentVolumeClaim:
claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
labels:
app: prometheus
name: prometheus
spec:
type: LoadBalancer
loadBalancerIP: ...
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
app: prometheus
Application deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-app
spec:
replicas: 2
selector:
matchLabels:
app: api-app
template:
metadata:
labels:
app: api-app
spec:
containers:
- name: nginx
image: nginx
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 10m
memory: 50Mi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: api-app
image: azurecr.io/app_api_se:opencv
workingDir: /app
command: ["/usr/local/bin/uwsgi"]
args:
- "--die-on-term"
- "--manage-script-name"
- "--mount=/=api:app_dispatch"
- "--socket=/var/run/app/uwsgi.sock"
- "--chmod-socket=777"
- "--pyargv=se"
- "--metrics-dir=/storage"
- "--metrics-dir-restore"
resources:
requests:
cpu: 150m
memory: 1Gi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: storage
mountPath: /storage
volumes:
- name: app-api
emptyDir: {}
- name: storage
persistentVolumeClaim:
claimName: app-storage
- name: nginx-conf
configMap:
name: app
tolerations:
- key: "sku"
operator: "Equal"
value: "test"
effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: api-app
name: api-app
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: api-app
Your issue is with the wrong type of controller used to deploy Prometheus. The Deployment controller is the wrong choice in this case (it's meant for stateless applications that don't need to maintain any persistent identifiers or data across Pod rescheduling).
You should switch to the StatefulSet kind* if you require persistence of data (the metrics scraped by Prometheus) across Pod (re)scheduling; see the sketch below.
*This is how Prometheus is deployed by default with prometheus-operator.
With this configuration for a volume, it will be removed when you release a Pod. You are basically looking for a PersistentVolume; see the documentation and example.
Also check PersistentVolumeClaim.
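A minimal sketch of what the Prometheus StatefulSet could look like with a volumeClaimTemplate, so the TSDB directory survives Pod rescheduling (the serviceName, storage size, and omitted storage class are assumptions to adapt):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: default
spec:
  serviceName: prometheus                # name of a governing (headless) Service, assumed
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
  volumeClaimTemplates:                  # one PVC per replica, retained across rescheduling
    - metadata:
        name: prometheus-storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi                # size is an assumption
The existing Service selector (app: prometheus) keeps working unchanged.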

Kubernetes 1.4 SSL Termination on AWS

I have 6 HTTP micro-services. Currently they run in a crazy bash/custom deploy tools setup (dokku, mup).
I dockerized them and moved to Kubernetes on AWS (set up with kops). The last piece is converting my nginx config.
I'd like
All 6 to have SSL termination (not in the docker image)
4 need websockets and client IP session affinity (Meteor, Socket.io)
5 need http->https forwarding
1 serves the same content on http and https
I did the SSL termination (item 1) by setting the service type to LoadBalancer and using AWS-specific annotations. This created AWS load balancers, but it seems like a dead end for the other requirements.
I looked at Ingress, but don't see how to do it on AWS. Will this Ingress Controller work on AWS?
Do I need an nginx controller in each pod? This looked interesting, but I'm not sure how recent/relevant it is.
I'm not sure what direction to start in. What will work?
Mike
You should be able to use the nginx ingress controller to accomplish this.
SSL termination
Websocket support
http->https
Turn off the http->https redirect, as described in the link above
The README walks you through how to set it up, and there are plenty of examples.
The basic pieces you need to make this work are:
A default backend that will respond with 404 when there is no matching Ingress rule
The nginx ingress controller which will monitor your ingress rules and rewrite/reload nginx.conf whenever they change.
One or more ingress rules that describe how traffic should be routed to your services.
The end result is that you will have a single ELB that corresponds to your nginx ingress controller service, which in turn is responsible for routing to your individual services according to the ingress rules specified.
There may be a better way to do this. I wrote this answer because I asked the question. It's the best I could come up with from Pixel Elephant's doc links above.
The default-http-backend is very useful for debugging. +1
Ingress
this creates an endpoint on the node's IP address, which can change depending on where the Ingress Container is running
note the configmap at the bottom. Configured per environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
name: all-ingress
spec:
tls:
- hosts:
- admin-stage.example.io
secretName: tls-secret
rules:
- host: admin-stage.example.io
http:
paths:
- backend:
serviceName: admin
servicePort: http-port
path: /
---
apiVersion: v1
data:
enable-sticky-sessions: "true"
proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
kind: ConfigMap
metadata:
name: nginx-load-balancer-conf
App Service and Deployment
the service port needs to be named, or you may get "upstream default-admin-80 does not have any active endpoints. Using default backend"
apiVersion: v1
kind: Service
metadata:
name: admin
spec:
ports:
- name: http-port
port: 80
protocol: TCP
targetPort: http-port
selector:
app: admin
sessionAffinity: ClientIP
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: admin
spec:
replicas: 1
template:
metadata:
labels:
app: admin
name: admin
spec:
containers:
- image: example/admin:latest
name: admin
ports:
- containerPort: 80
name: http-port
resources:
requests:
cpu: 500m
memory: 1000Mi
volumeMounts:
- mountPath: /etc/env-volume
name: config
readOnly: true
imagePullSecrets:
- name: cloud.docker.com-pull
volumes:
- name: config
secret:
defaultMode: 420
items:
- key: admin.sh
mode: 256
path: env.sh
- key: settings.json
mode: 256
path: settings.json
secretName: env-secret
Ingress Nginx Docker Image
note default-ssl-certificate at the bottom
verbose logging is helpful; see the --v flag below
note the Service will create an ELB on AWS, which can be used to configure DNS.
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-service
spec:
ports:
- name: http-port
port: 80
protocol: TCP
targetPort: http-port
- name: https-port
port: 443
protocol: TCP
targetPort: https-port
selector:
app: nginx-ingress-service
sessionAffinity: None
type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
replicas: 1
selector:
k8s-app: nginx-ingress-lb
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
name: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: nginx-ingress-lb
imagePullPolicy: Always
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http-port
containerPort: 80
hostPort: 80
- name: https-port
containerPort: 443
hostPort: 443
# we expose 18080 to access nginx stats in url /nginx-status
# this is optional
- containerPort: 18080
hostPort: 18080
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --default-ssl-certificate=default/tls-secret
- --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
- --v=2
Default Backend (this is copy/paste from .yaml file)
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
name: default-http-backend
spec:
replicas: 1
selector:
k8s-app: default-http-backend
template:
metadata:
labels:
k8s-app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
This config uses three secrets:
tls-secret - 3 files: tls.key, tls.crt, dhparam.pem
env-secret - 2 files: admin.sh and settings.json. The container has a start script to set up the environment.
cloud.docker.com-pull - the image pull secret referenced by imagePullSecrets in the Deployment above
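For completeness, a hedged sketch of how tls-secret could be defined as a generic Secret carrying the three files listed above (the empty data values are placeholders; in practice the secret is usually created from the real files rather than written by hand):
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: default        # matches --default-ssl-certificate=default/tls-secret above
type: Opaque
data:
  tls.crt: ""               # base64-encoded certificate chain (placeholder)
  tls.key: ""               # base64-encoded private key (placeholder)
  dhparam.pem: ""           # base64-encoded DH parameters (placeholder)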