I am new to Kubernetes. By reading some blogs and documentation I have successfully created an EKS cluster. I am using an ALB (layer 7 load balancing) for my Django app, and the ALB ingress controller to control the routes/paths. But I am unable to serve the static content for the Django admin. I know that I need a web server (Nginx) to serve my static files; I'm just not sure how to configure it.
Note: I don't want to use WhiteNoise.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "backend-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: subnet-1, subnet-2, subnet-3
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:***:certificate/*
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  labels:
    app: stage
spec:
  rules:
    - host: "*.somedomain.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: backend-service
              servicePort: 8000
This is the ingress YAML I am using, but whenever I visit my Django admin, it doesn't load the CSS and JS files.
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-dashboard-backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: server-dashboard
          image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
          volumeMounts:
            - name: staticfiles
              mountPath: /data
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /static /data/"]
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /data
I solved the problem by creating a Pod with the Django backend and an Nginx reverse proxy sharing the static files volume:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: nginx
          image: ...
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /data
        - name: django
          image: ...
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: staticfiles
              mountPath: /data
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"]
Then, in the Service (and the Ingress), point to the Nginx container's port 80.
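For example, a minimal Service sketch targeting the Nginx container's port 80 might look like this (the name myapp-service is my own assumption; the selector matches the Pod template labels above):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service    # assumed name; the Ingress backend would reference it
spec:
  selector:
    app: myapp           # matches the Pod template labels in the Deployment above
  ports:
    - name: http
      port: 80           # traffic from the Ingress/ALB
      targetPort: 80     # the Nginx container port inside the Pod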
I have solved the problem.
I removed the command ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"] because I was mounting to the wrong path. The new deployment file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-dashboard-backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: server-dashboard
          image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
          volumeMounts:
            - name: staticfiles
              mountPath: /usr/src/code/static
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /usr/share/nginx/html/static/
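For reference, with the default nginx:stable image the /usr/share/nginx/html/static/ mount is already served under /static/. If you also want Nginx to proxy the remaining traffic to the Django container in the same Pod, a minimal sketch could look like this (the ConfigMap name nginx-conf and the assumption that Django listens on port 8000 are mine, not from the question); mount it at /etc/nginx/conf.d in the nginx container:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf   # assumed name
data:
  default.conf: |
    server {
      listen 80;

      # serve the collected static files directly from the shared volume
      location /static/ {
        alias /usr/share/nginx/html/static/;
      }

      # everything else goes to the Django container in the same Pod
      location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }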
Here are my deployment files:
------
# Django deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django1
  labels:
    app: django1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-container
  template:
    metadata:
      labels:
        app: django-container
    spec:
      containers:
        - name: todo
          image: jayantkeer/image-of-kubernets
          command: ["python manage.py makemigrations", "python manage.py migrate", "python manage.py"] # runs migrations and starts the server
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_HOST
              value: postgres-service
And the postgres service file:
apiVersion: v1
kind: Service
metadata:
  name: todo
  labels:
    app: todo
spec:
  type: NodePort
  selector:
    app: django-container
  ports:
    - port: 8000
      targetPort: 8000
-------
And the postgres deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: DATABASE_PASS
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_DB
              value: kubernetes_django
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pvc
I have three containers in a Pod: nginx, redis, and a custom Django app. It seems like none of them can talk to each other in Kubernetes. In Docker Compose they do, but I can't use Docker Compose in production.
The Django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
The nginx container starts but never shows any traffic, and trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: djangonetwork
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/djangonetwork: "true"
  podSelector:
    matchLabels:
      io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
  DB_HOST: db
  DB_NAME: django_db
  DB_PASSWORD: password
  DB_PORT: "5432"
  DB_USER: user
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.network/djangonetwork: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: nginx:alpine
          name: nginxcontainer
          ports:
            - containerPort: 8000
        - image: redis:alpine
          name: rediscontainer
          ports:
            - containerPort: 6379
          resources: {}
        - env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: DB_HOST
                  name: envs--django
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: DB_NAME
                  name: envs--django
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: DB_PASSWORD
                  name: envs--django
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  key: DB_PORT
                  name: envs--django
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  key: DB_USER
                  name: envs--django
          image: localhost:5000/integration/web:latest
          name: djangocontainer
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:               # required for apps/v1; matches the template labels
    matchLabels:
      service: web
      component: redis
  template:
    metadata:
      labels:
        service: web
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - name: redis
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis              # <-- this name will be a DNS name
spec:
  selector:                # matches the template: { metadata: { labels: } }
    service: web
    component: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis    # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
          env:
            - name: REDIS_HOST
              value: redis # matches the Service
If all three parts are in the same Pod, the Service can't distinguish which container it is routing to. Containers in a Pod share a network namespace and must talk to each other over localhost; the containers: [{ name: }] values have no effect on networking.
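If you do decide to keep everything in one Pod anyway, the Django container has to reach Redis over loopback rather than a hostname. A hypothetical sketch, assuming the application reads the broker host and port from environment variables (the variable names here are made up, not from the question):

        - name: djangocontainer
          image: localhost:5000/integration/web:latest
          env:
            - name: REDIS_HOST
              value: "127.0.0.1"   # same Pod => shared network namespace, so use loopback
            - name: REDIS_PORT
              value: "6379"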
After running kubectl apply -f pvc.yaml with the YAML below, I can find the mount path /var/local/pvctest inside the container that was created, but the host path /var/local/pvctest is not created on the worker node.
I'm new to PV & PVC with EKS, and any help to fix this issue is much appreciated!
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pvctest
  labels:
    alias: pvctest
spec:
  selector:
    matchLabels:
      alias: pvctest
  replicas: 1
  template:
    metadata:
      labels:
        alias: pvctest
    spec:
      containers:
        - name: pvctest
          image: neo4j
          ports:
            - containerPort: 7474
            - containerPort: 7687
          volumeMounts:
            - name: testpv
              mountPath: /var/local/pvctest
      volumes:
        - name: testpv
          persistentVolumeClaim:
            claimName: pvctest-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvtest
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/local/pvctest
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvctest-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
A PersistentVolume with hostPath requires the directory on the host to be pre-created. If you want the directory to be created automatically for you, use a hostPath volume directly in the Pod spec with type: DirectoryOrCreate:
...
containers:
  - name: pvctest
    image: neo4j
    ...
    volumeMounts:
      - name: testpv
        mountPath: /var/local/pvctest
volumes:
  - name: testpv
    hostPath:
      path: /data
      type: DirectoryOrCreate
PV/PVC is actually optional for hostPath.
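If you prefer to keep the PV/PVC pair from the question, the same type field can be set on the PersistentVolume's hostPath as well; a sketch based on the manifests above:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvtest
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/local/pvctest
    type: DirectoryOrCreate   # creates the host directory if it does not already exist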
I am trying to deploy an empty alang/django image on my Minikube cluster using Kubernetes. This is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: django
  template:
    metadata:
      labels:
        pod: django
    spec:
      restartPolicy: "Always"
      containers:
        - name: django
          image: alang/django
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: pg-username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: pg-db-password
            - name: POSTGRES_HOST
              value: postgres-service
            - name: REDIS_HOST
              value: redis-service
            - name: GUNICORN_CMD_ARGS
              value: "--bind 0.0.0.0:8000"
But I am facing issues with the deployment, I think related to gunicorn; I am getting this back:
TypeError: the 'package' argument is required to perform a relative import for '.wsgi'
Is there any way to deploy it correctly?
I'm collecting Prometheus metrics from a uwsgi application hosted on Kubernetes, but the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster, and I have assigned persistent storage to it.
How do I retain the metrics from the pods even after they are deleted?
The Prometheus deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus
  name: prometheus
spec:
  type: LoadBalancer
  loadBalancerIP: ...
  ports:
    - port: 80
      protocol: TCP
      targetPort: 9090
  selector:
    app: prometheus
Application deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-app
  template:
    metadata:
      labels:
        app: api-app
    spec:
      containers:
        - name: nginx
          image: nginx
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx", "-s", "quit"]
          ports:
            - containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 50Mi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
        - name: api-app
          image: azurecr.io/app_api_se:opencv
          workingDir: /app
          command: ["/usr/local/bin/uwsgi"]
          args:
            - "--die-on-term"
            - "--manage-script-name"
            - "--mount=/=api:app_dispatch"
            - "--socket=/var/run/app/uwsgi.sock"
            - "--chmod-socket=777"
            - "--pyargv=se"
            - "--metrics-dir=/storage"
            - "--metrics-dir-restore"
          resources:
            requests:
              cpu: 150m
              memory: 1Gi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: storage
              mountPath: /storage
      volumes:
        - name: app-api
          emptyDir: {}
        - name: storage
          persistentVolumeClaim:
            claimName: app-storage
        - name: nginx-conf
          configMap:
            name: app
      tolerations:
        - key: "sku"
          operator: "Equal"
          value: "test"
          effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api-app
  name: api-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: api-app
Your issue is the type of controller used to deploy Prometheus. A Deployment is the wrong choice in this case: it is meant for stateless applications that don't need to keep any persistent identity or data across Pod rescheduling.
You should switch to a StatefulSet* if you require the data (the metrics scraped by Prometheus) to persist across Pod (re)scheduling.
*This is how Prometheus is deployed by default by prometheus-operator.
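A minimal sketch of what that could look like, reusing the image, args, and ConfigMap from the Deployment above (the storage class name and size are assumptions, not from the question):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: default
spec:
  serviceName: prometheus
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
  volumeClaimTemplates:                # one PVC per replica, retained across rescheduling
    - metadata:
        name: prometheus-storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: default      # assumption; use a storage class available in your cluster
        resources:
          requests:
            storage: 50Gi              # assumption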
With this configuration for the volume, its contents will be removed when the Pod is released. You are basically looking for a PersistentVolume (see the documentation and examples).
Also check PersistentVolumeClaim.
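For reference, a minimal PersistentVolumeClaim sketch that the uwsgi metrics directory could be backed by (the claim name app-storage matches the application Deployment above; the access mode, storage class, and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage             # referenced by the api-app Deployment above
spec:
  accessModes:
    - ReadWriteMany             # assumption: both replicas write their metrics to it
  storageClassName: azurefile   # assumption; adjust to a storage class in your cluster
  resources:
    requests:
      storage: 5Gi              # assumption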