I have three containers in a pod: nginx, redis, and a custom Django app. None of them seem to be able to talk to each other in Kubernetes. They do under Docker Compose, but I can't use Docker Compose in production.
The Django container logs this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
and the nginx container starts but never sees any traffic. Trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: djangonetwork
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/djangonetwork: "true"
podSelector:
matchLabels:
io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
DB_HOST: db
DB_NAME: django_db
DB_PASSWORD: password
DB_PORT: "5432"
DB_USER: user
kind: ConfigMap
metadata:
creationTimestamp: null
labels:
io.kompose.service: web
name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: web
name: web
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: web
strategy:
type: Recreate
template:
metadata:
labels:
io.kompose.network/djangonetwork: "true"
io.kompose.service: web
spec:
containers:
- image: nginx:alpine
name: nginxcontainer
ports:
- containerPort: 8000
- image: redis:alpine
name: rediscontainer
ports:
- containerPort: 6379
resources: {}
- env:
- name: DB_HOST
valueFrom:
configMapKeyRef:
key: DB_HOST
name: envs--django
- name: DB_NAME
valueFrom:
configMapKeyRef:
key: DB_NAME
name: envs--django
- name: DB_PASSWORD
valueFrom:
configMapKeyRef:
key: DB_PASSWORD
name: envs--django
- name: DB_PORT
valueFrom:
configMapKeyRef:
key: DB_PORT
name: envs--django
- name: DB_USER
valueFrom:
configMapKeyRef:
key: DB_USER
name: envs--django
image: localhost:5000/integration/web:latest
name: djangocontainer
ports:
- containerPort: 8000
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
name: web
spec:
ports:
- name: "8000"
port: 8000
targetPort: 8000
selector:
io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
spec:
  replicas: 1
  selector:             # required for apps/v1; must match the template labels below
    matchLabels:
      service: web
      component: redis
  template:
metadata:
labels:
service: web
component: redis
spec:
containers:
- name: redis
image: redis
ports:
- name: redis
containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis # <-- this name will be a DNS name
spec:
selector: # matches the template: { metadata: { labels: } }
service: web
component: redis
ports:
- name: redis
port: 6379
targetPort: redis # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
...
env:
- name: REDIS_HOST
value: redis # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which part it's talking to. Containers in the same Pod share a network namespace and need to talk to each other as localhost; the containers: [{ name: }] values have no effect on networking.
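Once Redis is split out behind its own Service, you can sanity-check the DNS name from inside the cluster. One throwaway check (the pod name here is arbitrary) is:

kubectl run -it --rm redis-check --image=redis:alpine --restart=Never -- redis-cli -h redis ping
# should print PONG once the redis Service and Deployment are up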
Related
I'm new to Kubernetes and AWS, so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have three services, frontend, backend, and auth, each with its corresponding NodePort, and an Ingress that maps one host to each service. Everything runs on EKS, and for the Ingress deployment I am using the AWS ingress controller. Once everything is deployed and I register the node group in the target groups, the frontend and auth services work correctly, but the backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same Deployment defined, and both are APIs built with .NET Core. One thing to note is that I can run kubectl port-forward <backend-pod> 80:80 and it works without problems. And when I run kubectl describe ingresses I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /     backend-service:default-port (<none>)
  auth.domain.com
                   /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ---                   ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
name: front
labels:
name: front
spec:
replicas: 2
selector:
matchLabels:
name: front
template:
metadata:
labels:
name: front
spec:
containers:
- name: frontend
image: {{ .Values.image }}
imagePullPolicy: Always
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: wrfront-{{ .Values.namespace }}-service
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
name: default-port
protocol: TCP
selector:
name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-wrauth-keys
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
name: "auth-config-ocpm"
labels:
app: "auth"
data:
ASPNETCORE_URL: "http://+:80"
ASPNETCORE_ENVIRONMENT: "Development"
ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "auth"
labels:
app: "auth"
spec:
replicas: 2
strategy:
type: Recreate
selector:
matchLabels:
app: "auth"
template:
metadata:
labels:
app: "auth"
spec:
volumes:
- name: auth-keys-storage
persistentVolumeClaim:
claimName: pvc-wrauth-keys
containers:
- name: "api-auth"
image: {{ .Values.image }}
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: auth-keys-storage
mountPath: "/app/auth-keys"
env:
- name: "ASPNETCORE_URL"
valueFrom:
configMapKeyRef:
key: "ASPNETCORE_URL"
name: "auth-config-ocpm"
- name: "ASPNETCORE_ENVIRONMENT"
valueFrom:
configMapKeyRef:
key: "ASPNETCORE_ENVIRONMENT"
name: "auth-config-ocpm"
- name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
valueFrom:
configMapKeyRef:
key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
name: auth-service
spec:
type: NodePort
selector:
app: auth
ports:
- name: default-port
protocol: TCP
port: 80
targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 2
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: {{ .Values.image }}
imagePullPolicy: Always
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
type: NodePort
selector:
name: backend
ports:
- name: default-port
protocol: TCP
port: 80
targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
kubernetes.io/ingress.class: alb
# SSL Settings
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
rules:
- host: {{ .Values.host }}
http:
paths:
- path: /
backend:
service:
name: front-service
port:
name: default-port
pathType: Prefix
- host: back.{{ .Values.host }}
http:
paths:
- path: /
backend:
service:
name: backend-service
port:
name: default-port
pathType: Prefix
- host: auth.{{ .Values.host }}
http:
paths:
- path: /
backend:
service:
name: auth-service
port:
name: default-port
pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only the backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem? Some error in the Ingress or the Deployment? Or is it just the backend service?
I would be very grateful for any help.
domain.com
  /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
  /   backend-service:default-port (<none>)
auth.domain.com
  /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This output is saying that your backend Service has no endpoints registered with the Ingress.
One thing to remember is that the Ingress registers Services by the Pods' endpoint IPs, like "10.0.1.30:80" in your Ingress output, not by NodePort. And, looking at the docs, I'm not sure how you ended up with multiple NodePort Services on the same port. But kubectl port-forward just opens the port on your local machine and tunnels straight to the Pod, bypassing the Service and the ALB, so that test succeeding doesn't tell you the target is healthy.
I think your issue is that your Ingress cannot locate your backend Service.
My suggestions are:
Try with only the backend-service, with its port changed, and maybe without the auth and frontend Services. The default NodePort range is 30000-32767.
Go inside that Pod (or create a new one) and make a request to the Service using its URL to check what it returns. By default, the ALB only accepts a 200 status from the target's homepage.
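As a concrete check (commands assume the resource names above; they're not from the original post), empty endpoints usually mean the Service selector doesn't match the Pod labels. You can confirm that and exercise the Service from inside the cluster like this:

# Does backend-service select any Pods? Empty ENDPOINTS means a selector/label mismatch.
kubectl get endpoints backend-service
# Hit the Service from a throwaway Pod inside the cluster and print the HTTP status code.
kubectl run -it --rm curl-check --image=curlimages/curl --restart=Never -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://backend-service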
I have an AWS EKS cluster with two pods running: one for a Redis cache and the other for a GraphQL API.
Below are the config files for my Kubernetes cluster.
cache-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: chatto-cache-deployment
spec:
replicas: 1
selector:
matchLabels:
tier: cache
template:
metadata:
labels:
tier: cache
spec:
containers:
- name: cache-container
image: redis
imagePullPolicy: Always
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
cache-service.yaml
apiVersion: v1
kind: Service
metadata:
name: chatto-cache-service
spec:
type: ClusterIP
selector:
tier: cache
ports:
- protocol: TCP
port: 6379
targetPort: 6379
server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: chatto-server-deployment
spec:
replicas: 1
selector:
matchLabels:
tier: server
template:
metadata:
labels:
tier: server
spec:
containers:
- name: server-container
image: rocketblast2481/chatto-server
imagePullPolicy: Always
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
env:
- name: DB_USERNAME
valueFrom:
configMapKeyRef:
name: env-config-map
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
configMapKeyRef:
name: env-config-map
key: DB_PASSWORD
- name: DB_HOST
valueFrom:
configMapKeyRef:
name: env-config-map
key: DB_HOST
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: env-config-map
key: DB_PORT
- name: DB_DATABASE
valueFrom:
configMapKeyRef:
name: env-config-map
key: DB_DATABASE
- name: REDIS_HOST
valueFrom:
configMapKeyRef:
name: env-config-map
key: REDIS_HOST
- name: REDIS_PORT
valueFrom:
configMapKeyRef:
name: env-config-map
key: REDIS_PORT
server-service.yaml
apiVersion: v1
kind: Service
metadata:
name: chatto-server-service
spec:
type: LoadBalancer
selector:
tier: server
ports:
- protocol: TCP
port: 80
targetPort: 80
Okay, here's the problem. As you can see, the server-service is of type LoadBalancer.
When I run the command, kubectl get services, this is the output:
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
chatto-cache-service    ClusterIP      10.100.73.117   <none>                                                                    6379/TCP       5h17m
chatto-server-service   LoadBalancer   10.100.11.249   afe3adae38c0242c8be0795609ee8a6c-424128423.us-east-2.elb.amazonaws.com    80:31643/TCP   17m
kubernetes              ClusterIP      10.100.0.1      <none>                                                                    443/TCP        6h35m
I copied the EXTERNAL-IP of the chatto-server-service and pasted it into Postman, but I get a "connection refused" error.
Could someone tell me why this might be happening? It might be because of the way I have configured the security groups, but I don't know for sure.
Thanks for any feedback and help and please let me know if you need any other information.
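A generic way to narrow this down, before suspecting security groups, is to confirm that the Service actually has Pod endpoints and that the container really listens on port 80 (these checks are not from the original post; they use the names from the manifests above):

# Are there Pod endpoints behind the LoadBalancer Service?
kubectl get endpoints chatto-server-service
# Does the server container log that it is listening on port 80?
kubectl logs deployment/chatto-server-deployment
# Bypass the ELB and test the Pod directly from your machine.
kubectl port-forward deployment/chatto-server-deployment 8080:80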
Edit:
Here is a screenshot of the load balancer:
I have been trying to set up RabbitMQ on a k8s cluster. I finally got everything set up, but only one node shows up on the management UI. Here are my steps:
1. Dockerfile Setup
I do this to enable autocluster:
FROM rabbitmq:3.8-rc-management-alpine
MAINTAINER kevlai
RUN rabbitmq-plugins --offline enable rabbitmq_peer_discovery_k8s
2. Set up RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
name: borecast-rabbitmq
namespace: borecast-production
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: borecast-rabbitmq
namespace: borecast-production
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: borecast-rabbitmq
namespace: borecast-production
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dev
subjects:
- kind: ServiceAccount
name: borecast-rabbitmq
namespace: borecast-production
3. Set up Secrets
apiVersion: v1
kind: Secret
metadata:
name: rabbitmq-secret
namespace: borecast-production
type: Opaque
data:
username: a2V2
password: Ym9yZWNhc3RydWx6
secretCookie: c2VjcmV0Y29va2llaGVyZQ==
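For reference, the values under data: are base64-encoded; they are typically produced with something like this (placeholder strings, not the real credentials):

# -n avoids encoding a trailing newline into the Secret value
echo -n 'myuser' | base64
echo -n 'mypassword' | base64
echo -n 'mysecretcookie' | base64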
4. Set up StorageClass
I'm setting up a StorageClass so Kubernetes will automatically do the provisioning for me on AWS.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: rabbitmq-sc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
zone: us-east-2a
reclaimPolicy: Retain
5. Set up StatefulSets and Services
You can see there are two Services. The headless Service is for the pods themselves. As for the management Service, I'll expose it through an Ingress controller so that it's accessible from outside (a sketch of such an Ingress follows after these manifests).
---
apiVersion: v1
kind: Service
metadata:
name: borecast-rabbitmq-management-service
namespace: borecast-production
labels:
app: borecast-rabbitmq
spec:
ports:
- port: 15672
targetPort: 15672
name: http
- port: 5672
targetPort: 5672
name: amqp
selector:
app: borecast-rabbitmq
---
apiVersion: v1
kind: Service
metadata:
name: borecast-rabbitmq-service
namespace: borecast-production
labels:
app: borecast-rabbitmq
spec:
clusterIP: None
ports:
- port: 5672
name: amqp
selector:
app: borecast-rabbitmq
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: borecast-rabbitmq
namespace: borecast-production
spec:
serviceName: borecast-rabbitmq-service
replicas: 3
template:
metadata:
labels:
app: borecast-rabbitmq
spec:
serviceAccountName: borecast-rabbitmq
containers:
- image: docker.borecast.com/borecast-rabbitmq:v1.0.3
name: borecast-rabbitmq
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "150m"
limits:
memory: "512Mi"
cpu: "250m"
ports:
- containerPort: 5672
name: amqp
env:
- name: RABBITMQ_DEFAULT_USER
valueFrom:
secretKeyRef:
name: rabbitmq-secret
key: username
- name: RABBITMQ_DEFAULT_PASS
valueFrom:
secretKeyRef:
name: rabbitmq-secret
key: password
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbitmq-secret
key: secretCookie
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: K8S_SERVICE_NAME
# value: borecast-rabbitmq-service.borecast-production.svc.cluster.local
value: borecast-rabbitmq-service
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_NODENAME
value: "rabbit#$(MY_POD_NAME).$(K8S_SERVICE_NAME)"
# value: rabbit#$(MY_POD_NAME).borecast-rabbitmq-service.borecast-production.svc.cluster.local
- name: RABBITMQ_NODE_TYPE
value: disc
- name: AUTOCLUSTER_TYPE
value: "k8s"
- name: AUTOCLUSTER_DELAY
value: "10"
- name: AUTOCLUSTER_CLEANUP
value: "true"
- name: CLEANUP_WARN_ONLY
value: "false"
- name: K8S_ADDRESS_TYPE
value: "hostname"
- name: K8S_HOSTNAME_SUFFIX
value: ".$(K8S_SERVICE_NAME)"
# value: .borecast-rabbitmq-service.borecast-production.svc.cluster.local
volumeMounts:
- name: rabbitmq-volume
mountPath: /var/lib/rabbitmq
imagePullSecrets:
- name: regcred
volumeClaimTemplates:
- metadata:
name: rabbitmq-volume
namespace: borecast-production
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: rabbitmq-sc
resources:
requests:
storage: 5Gi
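For reference, an Ingress for the management Service could look something like this (the host and Ingress name are placeholders, and on older clusters the apiVersion would be extensions/v1beta1 with a slightly different backend syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: borecast-rabbitmq-management
  namespace: borecast-production
spec:
  rules:
    - host: rabbitmq.example.com    # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: borecast-rabbitmq-management-service
                port:
                  number: 15672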
Problem
Everything is working. However, when I access the management UI (i.e. when I access the borecast-rabbitmq-management-service on port 15672), I only see one node showing up, when it should be three:
Also notice that the cluster name is
rabbit#borecast-rabbitmq-0.borecast-rabbitmq-service.borecast-production.svc.cluster.local
but when I log out and log in again, sometimes the number 0 will be changed to 1 or 2 for borecast-rabbitmq-0.
And also notice the node name is
rabbit#borecast-rabbitmq-1.borecast-rabbitmq-service
And you guessed it, sometimes the number is 2 or 0 for borecast-rabbitmq-1.
I have been trying to debug but to no avail. The logs for each pod don't raise any suspicions, and every Service and StatefulSet is working normally. I repeated the five steps multiple times, and if your cluster is on AWS, you can replicate my setup exactly by following the steps (after creating the namespace borecast-production, of course). If anybody can shed some light on the matter, I'll be eternally grateful.
The problem is with the headless service name definition:
- name: K8S_SERVICE_NAME
# value: borecast-rabbitmq-service.borecast-production.svc.cluster.local
value: borecast-rabbitmq-service
which is a building block of the node name:
- name: RABBITMQ_NODENAME
value: "rabbit#$(MY_POD_NAME).$(K8S_SERVICE_NAME)"
whereas the proper node name should be the FQDN of the Pod (<statefulset name>-<ordinal index>.<headless_svc_name>.<namespace>.svc.cluster.local):
- name: RABBITMQ_NODENAME
value: "rabbit#$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
Therefore you ended up with the node name
borecast-rabbitmq-1.borecast-rabbitmq-service
instead of:
borecast-rabbitmq-1.borecast-rabbitmq-service.borecast-production.svc.cluster.local
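Note that $(MY_POD_NAMESPACE) is not defined in the StatefulSet shown above; it would need its own downward-API entry alongside the existing MY_POD_NAME, roughly like this:

- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace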
Look up the FQDNs of the Pods created by the borecast-rabbitmq StatefulSet (in other words: the SRV records of the Pods) with the nslookup utility from inside your cluster, as explained here, to see what form the RABBITMQ_NODENAME is expected to have.
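For example, from a throwaway pod (the image choice here is just one option):

kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -n borecast-production -- \
  nslookup borecast-rabbitmq-0.borecast-rabbitmq-service.borecast-production.svc.cluster.local
# The name that resolves here is the form RABBITMQ_NODENAME (after the rabbit@ prefix) needs to use.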
Try exposing port 4369 on the headless service as well;
https://www.rabbitmq.com/clustering.html
see the port access section.
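A sketch of what the headless Service above could look like with the clustering ports added (the port names are my own; 4369 is epmd and 25672 is the inter-node/CLI port per the RabbitMQ docs):

apiVersion: v1
kind: Service
metadata:
  name: borecast-rabbitmq-service
  namespace: borecast-production
  labels:
    app: borecast-rabbitmq
spec:
  clusterIP: None
  selector:
    app: borecast-rabbitmq
  ports:
    - port: 5672
      name: amqp
    - port: 4369
      name: epmd        # peer discovery port
    - port: 25672
      name: clustering  # inter-node and CLI tool communication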
Had the same issue, and it came down to:
Deleting all the RabbitMQ resources, including the PVC created under the StatefulSet.
Then reinstalling everything from the manifests.
I'm collecting Prometheus metrics from a uWSGI application hosted on Kubernetes, but the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster and I have assigned persistent storage to it.
How do I retain the metrics from the pods even after they are deleted?
The Prometheus deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
- "--storage.tsdb.retention=2200h"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
persistentVolumeClaim:
claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
labels:
app: prometheus
name: prometheus
spec:
type: LoadBalancer
loadBalancerIP: ...
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
app: prometheus
Application deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-app
spec:
replicas: 2
selector:
matchLabels:
app: api-app
template:
metadata:
labels:
app: api-app
spec:
containers:
- name: nginx
image: nginx
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 10m
memory: 50Mi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: api-app
image: azurecr.io/app_api_se:opencv
workingDir: /app
command: ["/usr/local/bin/uwsgi"]
args:
- "--die-on-term"
- "--manage-script-name"
- "--mount=/=api:app_dispatch"
- "--socket=/var/run/app/uwsgi.sock"
- "--chmod-socket=777"
- "--pyargv=se"
- "--metrics-dir=/storage"
- "--metrics-dir-restore"
resources:
requests:
cpu: 150m
memory: 1Gi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: storage
mountPath: /storage
volumes:
- name: app-api
emptyDir: {}
- name: storage
persistentVolumeClaim:
claimName: app-storage
- name: nginx-conf
configMap:
name: app
tolerations:
- key: "sku"
operator: "Equal"
value: "test"
effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: api-app
name: api-app
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: api-app
Your issue is with the wrong type of controller used to deploy Prometheus. The Deployment controller is the wrong choice in this case (it's meant for stateless applications that don't need to maintain any persistent identifiers or persistent data across Pod rescheduling).
You should switch to the StatefulSet kind* if you require persistence of data (the metrics scraped by Prometheus) across Pod (re)scheduling.
*This is how Prometheus is deployed by default with prometheus-operator.
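A minimal sketch of that change, reusing the existing prometheus-server-conf ConfigMap and letting a volumeClaimTemplate provision the data volume (the storage class and size are placeholders, not from the original post):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: default
spec:
  serviceName: prometheus   # should point at a (typically headless) Service for the pods
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
  volumeClaimTemplates:
    - metadata:
        name: prometheus-storage-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: default   # placeholder; use your cluster's storage class
        resources:
          requests:
            storage: 50Gi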
With this configuration for a volume, it will be removed when you release the pod. You are basically looking for a PersistentVolume; see the documentation and example.
Also check PersistentVolumeClaim.
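For reference, a minimal PersistentVolumeClaim of the kind that answer points to could look like this (the name, storage class, and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data               # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium   # placeholder; use your cluster's storage class
  resources:
    requests:
      storage: 50Gi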
I've been trying to follow the guestbook example to build another application which has to be available on a public interface.
This is my Kubernetes configuration (YAML):
apiVersion: v1
kind: Service
metadata:
name: my-app-server
labels:
app: my-app-server
tier: backend
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-app-server
spec:
replicas: 3
template:
metadata:
labels:
app: my-app-server
tier: backend
spec:
containers:
- name: ppm-server
image: docker/container:tag
imagePullPolicy: Always
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 3000
imagePullSecrets:
- name: myregistrykey
Not sure why this is not working. The guestbook all-in-one example seems to work just fine, though.
I tried using the exact same configuration file while just changing the variables in the configuration.
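A couple of generic first checks (these commands are my own suggestion, using the names from the manifest above) are to confirm that the Service was assigned an external address and that it actually selects the Pods:

# Has the cloud provider assigned an EXTERNAL-IP to the LoadBalancer Service?
kubectl get service my-app-server
# Are any backend Pods selected? This requires the Service spec to have a selector matching the Pod labels.
kubectl get endpoints my-app-server
kubectl get pods -l app=my-app-server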