kubernetes django deployment with gunicorn - django

I am trying to deploy an empty image of alang/django using kubernetes on my minikube cluster.
This is my manifest file for deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  labels:
    app: django
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: django
  template:
    metadata:
      labels:
        pod: django
    spec:
      restartPolicy: "Always"
      containers:
        - name: django
          image: alang/django
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: pg-username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: pg-db-password
            - name: POSTGRES_HOST
              value: postgres-service
            - name: REDIS_HOST
              value: redis-service
            - name: GUNICORN_CMD_ARGS
              value: "--bind 0.0.0.0:8000"
But I am facing issues with the deployment, I think related to gunicorn; I'm getting this back:
TypeError: the 'package' argument is required to perform a relative import for '.wsgi'
Is there any way to deploy it correctly?
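The error suggests gunicorn is being handed a relative module path ('.wsgi') without a package. One way to sidestep whatever default startup the alang/django image uses is to override the container command and name the WSGI module explicitly. A minimal sketch, assuming gunicorn is on the image's PATH and with mysite as a placeholder for the real project package:
      containers:
        - name: django
          image: alang/django
          # Hypothetical override: name the WSGI module explicitly instead of
          # relying on the image's default startup; "mysite" is a placeholder.
          command: ["gunicorn"]
          args: ["mysite.wsgi:application", "--bind", "0.0.0.0:8000"]
          ports:
            - containerPort: 8000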

Related

Does anyone know how to deploy a Django project which has a Postgres database on minikube locally? I have tried all of the references I could find, without success.

Here are my deployment files:
------
# Deployment: Django
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django1
  labels:
    app: django1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-container
  template:
    metadata:
      labels:
        app: django-container
    spec:
      containers:
        - name: todo
          image: jayantkeer/image-of-kubernets
          command: ["python manage.py makemigrations", "python manage.py migrate", "python manage.py"]  # runs migrations and starts the server
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_HOST
              value: postgres-service
And the Postgres service file:
apiVersion: v1
kind: Service
metadata:
  name: todo
  labels:
    app: todo
spec:
  type: NodePort
  selector:
    app: django-container
  ports:
    - port: 8000
      targetPort: 8000
-------
And the Postgres deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
        tier: backend
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6.6
          env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: DATABASE_PASS
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
            - name: POSTGRES_DB
              value: kubernetes_django
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-volume-mount
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-volume-mount
          persistentVolumeClaim:
            claimName: postgres-pvc
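As an aside, a command written as a list of whole shell commands won't run as intended: Kubernetes passes each list element as a single argv entry, so there is no executable literally named "python manage.py makemigrations". A minimal sketch of a shell-wrapped form, assuming the image has /bin/sh and the development server is the intended entrypoint:
          # Hypothetical sketch: chain the migration and server commands through
          # a shell so each one is parsed into separate arguments. Assumes /bin/sh
          # exists in the image and "runserver" (or gunicorn) is how the app starts.
          command: ["/bin/sh", "-c"]
          args:
            - |
              python manage.py makemigrations &&
              python manage.py migrate &&
              python manage.py runserver 0.0.0.0:8000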

Containers in pod won't talk to each other in Kubernetes

I have three containers in a pod: nginx, redis, and a custom Django app. It seems like none of them talk to each other in Kubernetes. In Docker Compose they do, but I can't use Docker Compose in production.
The Django container gets this error:
[2022-06-20 21:45:49,420: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 111 connecting to redis:6379. Connection refused..
Trying again in 32.00 seconds... (16/100)
The nginx container starts but never shows any traffic, and trying to connect to localhost:8000 gets no reply.
Any idea what's wrong with my YAML file?
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: djangonetwork
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/djangonetwork: "true"
  podSelector:
    matchLabels:
      io.kompose.network/djangonetwork: "true"
---
apiVersion: v1
data:
  DB_HOST: db
  DB_NAME: django_db
  DB_PASSWORD: password
  DB_PORT: "5432"
  DB_USER: user
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web
  name: envs--django
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.network/djangonetwork: "true"
        io.kompose.service: web
    spec:
      containers:
        - image: nginx:alpine
          name: nginxcontainer
          ports:
            - containerPort: 8000
        - image: redis:alpine
          name: rediscontainer
          ports:
            - containerPort: 6379
          resources: {}
        - env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: DB_HOST
                  name: envs--django
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: DB_NAME
                  name: envs--django
            - name: DB_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: DB_PASSWORD
                  name: envs--django
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  key: DB_PORT
                  name: envs--django
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  key: DB_USER
                  name: envs--django
          image: localhost:5000/integration/web:latest
          name: djangocontainer
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: web
  name: web
spec:
  ports:
    - name: "8000"
      port: 8000
      targetPort: 8000
  selector:
    io.kompose.service: web
You've put all three containers into a single Pod. That's usually not the preferred approach: it means you can't restart one of the containers without restarting all of them (any update to your application code requires discarding your Redis cache) and you can't individually scale the component parts (if you need five replicas of your application, do you also need five reverse proxies and can you usefully use five Redises?).
Instead, a preferred approach is to split these into three separate Deployments (or possibly use a StatefulSet for Redis with persistence). Each has a corresponding Service, and then those Service names can be used as DNS names.
A very minimal example for Redis could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      service: web
      component: redis
  template:
    metadata:
      labels:
        service: web
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - name: redis
              containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis # <-- this name will be a DNS name
spec:
  selector: # matches the template: { metadata: { labels: } }
    service: web
    component: redis
  ports:
    - name: redis
      port: 6379
      targetPort: redis # matches a containerPorts: [{ name: }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  ...
          env:
            - name: REDIS_HOST
              value: redis # matches the Service
If all three parts are in the same Pod, then the Service can't really distinguish which part it's talking to. Within a single Pod, the containers share a network namespace and need to talk to each other as localhost; the containers: [{ name: }] values have no practical effect on networking.
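For completeness, if you did keep the single-Pod layout, the Django container would have to reach Redis over the loopback address rather than a hostname like redis. A minimal sketch, assuming the image reads a REDIS_HOST variable (which is not in the ConfigMap shown above):
        # Hypothetical sketch: only relevant if all three containers stay in one Pod.
        # Assumes the Django image reads REDIS_HOST to build its broker/cache URL.
        - name: djangocontainer
          image: localhost:5000/integration/web:latest
          env:
            - name: REDIS_HOST
              value: "127.0.0.1"   # same Pod, shared network namespace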

how to serve static files from EKS cluster for Django?

I am new to Kubernetes. By reading some blogs and documentation I have successfully created an EKS cluster. I am using an ALB (layer 7 load balancing) for my Django app, and the ALB ingress controller for controlling the routes/paths. But I am unable to serve my static content for the Django admin. I know that I need a web server (Nginx) to serve my static files, but I'm not sure how to configure it.
Note: I don't want to use WhiteNoise.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "backend-ingress"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: subnet-1, subnet-2, subnet-3
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:***:certificate/*
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  labels:
    app: stage
spec:
  rules:
    - host: *.somedomain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: backend-service
              servicePort: 8000
This is the Ingress YAML I am using, but whenever I try to visit my Django admin it doesn't load the CSS and JS files.
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-dashboard-backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: server-dashboard
          image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
          volumeMounts:
            - name: staticfiles
              mountPath: /data
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /static /data/"]
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /data
I solved the problem by creating a Pod with the Django backend and an Nginx reverse proxy, sharing the static files volume:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: nginx
          image: ...
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /data
        - name: django
          image: ...
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: staticfiles
              mountPath: /data
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"]
Then, in the Service (and the Ingress), point at the Nginx port 80.
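A minimal sketch of that Service, assuming the labels from the Deployment above (the Service name is illustrative):
# Hypothetical sketch: a Service that routes traffic to the Nginx sidecar,
# which serves the static files itself and proxies the rest to Django.
apiVersion: v1
kind: Service
metadata:
  name: myapp      # illustrative name, not from the original answer
spec:
  selector:
    app: myapp     # matches the Pod template labels above
  ports:
    - name: http
      port: 80
      targetPort: 80   # the Nginx containerPort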
I have solved the problem. I removed the command ["/bin/sh", "-c", "cp -r /path/to/staticfiles /data/"]; I was also mounting the wrong path. The new deployment file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-dashboard-backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      volumes:
        - name: staticfiles
          emptyDir: {}
      containers:
        - name: server-dashboard
          image: *.dkr.ecr.ap-southeast-1.amazonaws.com/*:4
          volumeMounts:
            - name: staticfiles
              mountPath: /usr/src/code/static
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: staticfiles
              mountPath: /usr/share/nginx/html/static/
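With the stock nginx:stable image, files mounted under /usr/share/nginx/html/static/ are served at the /static/ path on port 80 by the default configuration, so the remaining piece is routing /static/* to the Nginx container. A hedged sketch of the Ingress rules, assuming a second Service (backend-static-service is an illustrative name) that selects the same Pods but targets port 80; the more specific path is listed first since the ALB evaluates rules in order:
spec:
  rules:
    - host: *.somedomain.com
      http:
        paths:
          - path: /static/*
            backend:
              serviceName: backend-static-service   # hypothetical Service targeting containerPort 80
              servicePort: 80
          - path: /*
            backend:
              serviceName: backend-service
              servicePort: 8000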

Non-persistent Prometheus metrics on Kubernetes

I'm collecting Prometheus metrics from a uWSGI application hosted on Kubernetes, but the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster and I have assigned persistent storage to it.
How do I retain the metrics from the pods even after they are deleted?
The Prometheus deployment yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus
  name: prometheus
spec:
  type: LoadBalancer
  loadBalancerIP: ...
  ports:
    - port: 80
      protocol: TCP
      targetPort: 9090
  selector:
    app: prometheus
Application deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-app
  template:
    metadata:
      labels:
        app: api-app
    spec:
      containers:
        - name: nginx
          image: nginx
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx", "-s", "quit"]
          ports:
            - containerPort: 80
              protocol: TCP
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 50Mi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d
        - name: api-app
          image: azurecr.io/app_api_se:opencv
          workingDir: /app
          command: ["/usr/local/bin/uwsgi"]
          args:
            - "--die-on-term"
            - "--manage-script-name"
            - "--mount=/=api:app_dispatch"
            - "--socket=/var/run/app/uwsgi.sock"
            - "--chmod-socket=777"
            - "--pyargv=se"
            - "--metrics-dir=/storage"
            - "--metrics-dir-restore"
          resources:
            requests:
              cpu: 150m
              memory: 1Gi
          volumeMounts:
            - name: app-api
              mountPath: /var/run/app
            - name: storage
              mountPath: /storage
      volumes:
        - name: app-api
          emptyDir: {}
        - name: storage
          persistentVolumeClaim:
            claimName: app-storage
        - name: nginx-conf
          configMap:
            name: app
      tolerations:
        - key: "sku"
          operator: "Equal"
          value: "test"
          effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api-app
  name: api-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: api-app
Your issue is with the wrong type of controller being used to deploy Prometheus. A Deployment is the wrong choice here: it's meant for stateless applications that don't need to maintain any persistent identity or data across Pod rescheduling.
You should switch to the StatefulSet kind* if you require persistence of data (the metrics scraped by Prometheus) across Pod (re)scheduling.
*This is how Prometheus is deployed by default with prometheus-operator.
With this configuration for a volume, it will be removed when the Pod goes away. You are basically looking for a PersistentVolume (see the documentation and example).
Also check PersistentVolumeClaim.
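A minimal sketch of the StatefulSet variant, reusing the ConfigMap from the Deployment above (the serviceName and storage size are illustrative):
# Hypothetical sketch: Prometheus as a StatefulSet with a volumeClaimTemplate,
# so each replica gets its own PersistentVolumeClaim that survives rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
spec:
  serviceName: prometheus        # illustrative; points at the Prometheus Service
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-server-conf
  volumeClaimTemplates:
    - metadata:
        name: prometheus-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi        # illustrative size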

Failed to connect to cloud SQL using proxy despite following the documentation

I'm using the Doctrine ORM with Symfony, the PHP framework. I'm getting bizarre behaviour when trying to connect to Cloud SQL from GKE.
I'm able to get a connection to the DB via Doctrine on the command line; for example, php bin/console doctrine:database:create succeeds and I can see a connection opened in the proxy pod logs.
But when I try to connect to the DB via Doctrine in my application, I run into this error without fail:
An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
I have been trying to get my head around this but it doesn't make sense: why would I be able to connect via the command line but not in my application?
I followed the documentation here for setting up a DB connection using the Cloud SQL proxy. This is my Kubernetes deployment:
---
apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
  name: "riptides-api"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "riptides-api"
      microservice: "riptides"
  template:
    metadata:
      labels:
        app: "riptides-api"
        microservice: "riptides"
    spec:
      containers:
        - name: "api-sha256"
          image: "eu.gcr.io/riptides/api@sha256:ce0ead9d1dd04d7bfc129998eca6efb58cb779f4f3e41dcc3681c9aac1156867"
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: riptides-mysql-user-skye
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: riptides-mysql-user-skye
                  key: password
            - name: DB_NAME
              value: riptides
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "php bin/console doctrine:migrations:migrate -n"]
          volumeMounts:
            - name: keys
              mountPath: "/app/config/jwt"
              readOnly: true
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=riptides:europe-west4:riptides-sql=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: riptides-mysql-service-account
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: keys
          secret:
            secretName: riptides-api-keys
            items:
              - key: private.pem
                path: private.pem
              - key: public.pem
                path: public.pem
        - name: riptides-mysql-service-account
          secret:
            secretName: riptides-mysql-service-account
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "riptides-api-hpa"
  namespace: "default"
  labels:
    app: "riptides-api"
    microservice: "riptides"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "riptides-api"
    apiVersion: "apps/v1beta1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: "Resource"
      resource:
        name: "cpu"
        targetAverageUtilization: 70
If anyone has any suggestions I'd be forever grateful.
It doesn't look like anything is wrong with your k8s YAML; the problem is more likely in how you are connecting using Symfony. According to the documentation here, Symfony expects the DB URI to be passed in through an environment variable called "DATABASE_URL". See the following example:
# customize this line!
DATABASE_URL="postgres://db_user:db_password@127.0.0.1:5432/db_name"
This was happening because Doctrine was using default values instead of the (supposedly overriding) environment variables I had set up in my deployment. I changed the environment variable names to be different from the default ones and it works now.
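For anyone hitting the same thing, a minimal sketch of making the connection explicit instead, by building DATABASE_URL in the Deployment itself. The mysql:// URL mirrors the proxy settings above, and $(VAR) expansion only works for variables defined earlier in the same env list:
          env:
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: riptides-mysql-user-skye
                  key: user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: riptides-mysql-user-skye
                  key: password
            # Hypothetical: hand Symfony/Doctrine the full DSN explicitly.
            # Passwords containing URL-reserved characters would need encoding.
            - name: DATABASE_URL
              value: "mysql://$(DB_USER):$(DB_PASSWORD)@127.0.0.1:3306/riptides"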