I have a minikube cluster running a deployment of my Django app. Until today we used the development server that Django spins up. Now I have added another Nginx container so that we can deploy the Django app properly, because I read that Django's built-in server is not really meant for production. After reading some documentation and blog posts, I configured the deployment.yaml file and it is running fine now.
The problem is that no static content is being served. This is because the static content lives in the Django container and not in the Nginx container. (I don't know whether the two containers can share a volume or not; please clarify this doubt or misconception.) What would be the best way to serve my static content?
This is my deployment file's spec:
spec:
  containers:
    - name: user-django-app
      image: my-django-app:latest
      ports:
        - containerPort: 8000
      env:
        - name: POSTGRES_HOST
          value: mysql-service
        - name: POSTGRES_USER
          value: admin
        - name: POSTGRES_PASSWORD
          value: admin
        - name: POSTGRES_PORT
          value: "8001"
        - name: POSTGRES_DB
          value: userdb
    - name: user-nginx
      image: nginx
      volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: nginx-config
      configMap:
        name: nginx-config
I believe that
server {
    location /static {
        alias /var/www/djangoapp/static;
    }
}
needs to be changed, but I don't know what I should write there. Also, how can I run python manage.py migrate and python manage.py collectstatic as soon as the deployment is made?
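(Would an initContainer along these lines be the right direction? This is only a rough sketch of what I imagine; the container name, the volume name and the assumption that STATIC_ROOT points at /app/static are all made up by me.)
spec:
  initContainers:
    - name: django-init                    # hypothetical name
      image: my-django-app:latest          # same image as the app container
      command: ["/bin/sh", "-c"]
      # run migrations, then collect the static files into a volume that nginx could also mount
      args: ["python manage.py migrate && python manage.py collectstatic --noinput"]
      volumeMounts:
        - name: static-files               # an emptyDir shared with the nginx container
          mountPath: /app/static
  containers:
    # ... the two containers from the spec above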
Kindly provide resources/docs/blog posts that will assist me in doing this. Thank you!
After @willrof's answer, this is my current YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      micro-service: user
  template:
    metadata:
      name: user
      labels:
        app: web
        micro-service: user
    spec:
      containers:
        - name: user-django-app
          image: docker.io/dev1911/drone_plus_plus_user:latest
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_HOST
              value: mysql-service
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: admin
            - name: POSTGRES_PORT
              value: "8001"
            - name: POSTGRES_DB
              value: userdb
          volumeMounts:
            - name: shared
              mountPath: /shared
          command: ["/bin/sh", "-c"]
          args: ["apt-get install nano"]
        - name: user-nginx
          image: nginx
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: shared
              mountPath: /var/www/user/static
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: shared
          emptyDir: {}
And the nginx-config file is:
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 4096; ## Default: 1024
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format ltsv 'domain:$host\t'
                    'host:$remote_addr\t'
                    'user:$remote_user\t'
                    'time:$time_local\t'
                    'method:$request_method\t'
                    'path:$request_uri\t'
                    'protocol:$server_protocol\t'
                    'status:$status\t'
                    'size:$body_bytes_sent\t'
                    'referer:$http_referer\t'
                    'agent:$http_user_agent\t'
                    'response_time:$request_time\t'
                    'cookie:$http_cookie\t'
                    'set_cookie:$sent_http_set_cookie\t'
                    'upstream_addr:$upstream_addr\t'
                    'upstream_cache_status:$upstream_cache_status\t'
                    'upstream_response_time:$upstream_response_time';
    access_log /var/log/nginx/access.log ltsv;
    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts
    keepalive_timeout 65;
    gzip on;
    server {
        listen 80;
        server_name example.com;
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        location /static {
            alias /var/www/user/static;
        }
    }
    # include /etc/nginx/conf.d/*.conf;
}
I did not write this config myself; I found it and edited it for my use.
After our chat in the comments, you told me you are having difficulties with using command and args.
Here is an example called two-containers.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
    - name: python
      image: python
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "apt-get update && apt-get install -y curl && mkdir /curl-folder && cp /usr/bin/curl /curl-folder && cp -r /curl-folder /pod-data/"]
    - name: user-nginx
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /tmp/pod-data
  volumes:
    - name: shared-data
      emptyDir: {}
The python container will start up, run apt-get update, then apt-get install -y curl, then mkdir /curl-folder, then copy /usr/bin/curl to /curl-folder, and finally copy the folder /curl-folder to the shared mounted volume /pod-data.
A few observations:
The container image has to contain the binary mentioned in command (like /bin/sh in the python image).
Try using && to chain commands consecutively in the args field; it's easier to test and deploy (see the short sketch just below).
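As a generic sketch, the shape is simply this (the container name, image and the three steps are placeholders):
- name: some-container
  image: some-image               # must contain /bin/sh
  command: ["/bin/sh", "-c"]
  # chaining with && stops at the first step that fails
  args: ["step-one && step-two && step-three"]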
Reproduction:
$ kubectl apply -f two-container-volume.yaml
pod/two-containers created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
two-containers 2/2 Running 0 7s
two-containers 1/2 NotReady 0 30s
$ kubectl describe pod two-containers
...
Containers:
  python:
    Container ID: docker://911462e67d7afab9bca6cdaea154f9229c80632efbfc631ddc76c3d431333193
    Image: python
    Command:
      /bin/sh
    Args:
      -c
      apt-get update && apt-get install -y curl && mkdir /curl-folder && cp /usr/bin/curl /curl-folder && cp -r /curl-folder /pod-data/
    State: Terminated
      Reason: Completed
      Exit Code: 0
  user-nginx:
    State: Running
The python container executed and completed correctly; now let's check the files by logging into the nginx container.
$ kubectl exec -it two-containers -c user-nginx -- /bin/bash
root@two-containers:/# cd /tmp/pod-data/curl-folder/
root@two-containers:/tmp/pod-data/curl-folder# ls
curl
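Applied to your django container, a minimal sketch of the same pattern could look like this (assuming STATIC_ROOT points at /shared and that gunicorn with a project.wsgi module is roughly what your image runs; keep whatever start command your image actually uses):
- name: user-django-app
  image: docker.io/dev1911/drone_plus_plus_user:latest
  command: ["/bin/sh", "-c"]
  # run migrations, collect the static files into the shared volume, then start the app
  args: ["python manage.py migrate && python manage.py collectstatic --noinput && gunicorn project.wsgi:application --bind 0.0.0.0:8000"]
  volumeMounts:
    - name: shared
      mountPath: /shared
That way nginx, which mounts the same shared volume at /var/www/user/static, can serve the collected files through its /static alias.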
If you need further help, post the yaml with the command+args you are trying to run and we can help you troubleshoot the syntax.
If it's a Django app, consider using WhiteNoise (http://whitenoise.evans.io/en/stable/) to serve your static content from minikube or Kubernetes.
This is straightforward advice, but I had to search quite a bit before someone mentioned it.
Related
I have some services running on the cluster, and the ALB is working fine. I want to configure SSL communication from the ALB/Ingress to Keycloak 17.0.1 by creating a self-signed certificate and establishing communication routed through port 8443 instead of HTTP (80). Keycloak is built from the Docker image, and the Docker Compose file exposes port 8443. I should also make sure to have the keystore defined as a Kubernetes PVC within the deployment instead of a Docker volume.
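For reference, this is roughly the PVC I have in mind for the keystore (a sketch; the storage size and access mode are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keycloak-pv-claim     # referenced from the deployment below
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # placeholder size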
Below is the deployment file:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "keycloak"
  namespace: "test"
spec:
  volumes:
    - name: keycloak-pv-volume
      persistentVolumeClaim:
        claimName: keycloak-pv-claim
spec:
  selector:
    matchLabels:
      app: "keycloak"
  replicas: 3
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "keycloak"
    spec:
      containers:
        - name: "keycloak"
          image: "quay.io/keycloak/keycloak:17.0.1"
          imagePullPolicy: "Always"
          livenessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 30
          env:
            - name: "KEYCLOAK_USER"
              value: "admin"
            - name: "KEYCLOAK_PASSWORD"
              value: "admin"
            - name: "PROXY_ADDRESS_FORWARDING"
              value: "true"
            - name: HTTPS_PROXY
              value: "https://engineering-exodos*********:3128"
            - name: KC_HIDE_REALM_IN_ISSUER
              value: ************
          ports:
            - name: "http"
              containerPort: 8080
            - name: "https"
              containerPort: 8443
The self-signed certificate is being created like below (.groovy):
def secretPatch = 'kc alb-secret-patch.yaml'
sh loadMixins() + """
    openssl req -newkey rsa:4096 \
        -x509 \
        -sha256 \
        -days 395 \
        -nodes \
        -out keycloak_alb.crt \
        -keyout keycloak_alb.key \
        -subj "/C=US/ST=MN/L=MN/O=Security/OU=IT Department/CN=www.gateway.com"
    EXISTS=$(kubectl -n istio-system get secret --ignore-not-found keycloak_alb-secret)
    if [ -z "$EXISTS" ]; then
        kubectl -n create secret tls keycloak_alb-secret --key="keycloak_alb.key" --cert="keycloak_alb.crt"
    else
        # base64 needs the '-w0' flag to avoid wrapping long lines
        echo -e "data:\n tls.key: $(base64 -w0 keycloak_alb.key)\n tls.crt: $(base64 -w0 keycloak.crt)" > ${secretPatch}
        kubectl -n istio-system patch secret keycloak_alb-secret -p "$(cat ${secretPatch})"
    fi
"""
}
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_CACHE=ispn
ENV KC_DB=postgres
USER keycloak
RUN chmod -R 755 /opt/keycloak \
&& chown -R keycloak:keycloak /opt/keycloak
COPY ./keycloak-benchmark-dataset.jar /opt/keycloak/providers/keycloak-benchmark-dataset.jar
COPY ./ness-event-listener.jar /opt/keycloak/providers/ness-event-listener.jar
# RUN curl -o /opt/keycloak/providers/ness-event-listener-17.0.0.jar https://repo1.uhc.com/artifactory/repo/com/optum/dis/keycloak/ness_event_listener/17.0.0/ness-event-listener-17.0.0.jar
# Changes for hiding realm in issuer claim in access token
COPY ./keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
COPY --from=builder /opt/keycloak/providers /opt/keycloak/providers
COPY --from=builder /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
COPY --chown=keycloak:keycloak cache-ispn-remote.xml /opt/keycloak/conf/cache-ispn-remote.xml
COPY conf /opt/keycloak/conf/
# Elastic APM integration changes
USER root
RUN mkdir -p /opt/elastic/apm
RUN chmod -R 755 /opt/elastic/apm
RUN curl -L https://repo1.uhc.com/artifactory/Thirdparty-Snapshots/com/elastic/apm/agents/java/current/elastic-apm-agent.jar -o /opt/elastic/apm/elastic-apm-agent.jar
ENV ES_AGENT=" -javaagent:/opt/elastic/apm/elastic-apm-agent.jar"
ENV ELASTIC_APM_SERVICE_NAME="AIDE_007"
ENV ELASTIC_APM_SERVER_URL="https://nonprod.uhc.com:443"
ENV ELASTIC_APM_VERIFY_SERVER_CERT="false"
ENV ELASTIC_APM_ENABLED="true"
ENV ELASTIC_APM_LOG_LEVEL="WARN"
ENV ELASTIC_APM_ENVIRONMENT="Test-DIS"
CMD export JAVA_OPTS="$JAVA_OPTS $ES_AGENT"
USER keycloak
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]
Docker Compose:
services:
  keycloak:
    image: quay.io/keycloak/keycloak:17.0.1
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      KC_HTTPS_CERTIFICATE_FILE: /opt/keycloak/tls.crt
      KC_HTTPS_CERTIFICATE_KEY_FILE: /opt/keycloak/tls.key
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - ./localhost.crt:/opt/keycloak/conf/tls.crt
      - ./localhost.key:/opt/keycloak/conf/tls.key
What is the best-practice way to go ahead and route the traffic via SSL from the ALB to Keycloak?
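To be concrete, this is roughly the shape of the Ingress I am experimenting with (only a sketch: the annotations assume the AWS Load Balancer Controller, the host and Service names are placeholders, and the listener certificate itself would still have to be attached to the ALB, e.g. via the certificate-arn annotation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # make the ALB talk HTTPS to the backend instead of plain HTTP
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
    - host: keycloak.example.com          # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak            # placeholder Service pointing at containerPort 8443
                port:
                  number: 8443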
This is a bit complicated, but I will try to explain as much as I can to give clarity. Any help is much appreciated.
I use Azure DevOps to do deployments to EKS using Helm, and everything is working fine. I now have a requirement to add a certificate to the pod.
For this I have a .der file with me, which I should copy to the pods (replicas: 3) and run keytool to import the certificate and put it in an appropriate location before my application starts.
My setup is: I have a Dockerfile, I call a shell script inside the Dockerfile, and I do helm install using the deployment.yml file.
I have now tried using a ConfigMap to mount the .der file, which is used to import the cert, and then I execute some Unix commands to import the certificate. The Unix commands are not working; can someone help here?
Dockerfile:
FROM amazonlinux:2.0.20181114
RUN yum install -y java-1.8.0-openjdk-headless
ARG JAR_FILE='**/*.jar'
ADD ${JAR_FILE} car_service.jar
ADD entrypoint.sh .
RUN chmod +x /entrypoint.sh
# split the ENTRYPOINT wrapper from the main CMD
ENTRYPOINT ["/entrypoint.sh"]
CMD ["java", "-jar", "/car_service.jar"]
entrypoint.sh
#!/bin/sh
# entrypoint.sh
# Check: $env_name must be set
if [ -z "$env_name" ]; then
echo '$env_name is not set; stopping' >&2
exit 1
fi
# Install aws client
yum -y install curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
yum -y install unzip
unzip awscliv2.zip
./aws/install
# Retrieve secrets from Secrets Manager
export KEYCLOAKURL=`aws secretsmanager get-secret-value --secret-id myathlon/$env_name/KEYCLOAKURL --query SecretString --output text`
cd /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin
keytool -noprompt -importcert -alias msfot-$(date +%Y%m%d-%H%M) -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
mkdir /data/keycloak/
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin/msfot.jks /data/keycloak/
cd /
# Run the main container CMD
exec "$@"
my configmap:
kubectl create configmap msfot1 --from-file=msfot.der
my deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      volumes:
        - name: msfot1
          configMap:
            name: msfot1
            items:
              - key: msfot.der
                path: msfot.der
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: msfot1
              mountPath: /tmp/msfot.der
              subPath: msfot.der
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: env_name
              value: {{ .Values.environmentName }}
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.profile }}"
my values.yml file is:
replicaCount: 3
# pass repository and targetPort values during runtime
image:
  repository:
  tag: "latest"
  pullPolicy: Always
service:
  type: ClusterIP
  port: 80
  targetPort:
profile: "aws"
environmentName: dev
I have 2 questions here:
1. In my entrypoint.sh file the keytool, mkdir, cp and cd commands are not getting executed (so the certificate is not getting added to the keystore).
2. As you know, this setup works for all environments, since I use the same deployment.yml file (though I have a different values.yml file for each environment). I want this certificate import to happen only in acc and prod, not in dev and test. Is there any other, easier method of doing this than the ConfigMap/deployment.yml approach? (A sketch of what I am considering is below.)
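One approach I am considering (just a sketch; importCertificate is a flag name I am making up, it does not exist in my chart yet) is to drive it from the per-environment values files:
# values.yml for acc and prod (dev and test would set this to false or omit it)
importCertificate: true
and then guard the certificate-related parts of deployment.yml with it, roughly:
{{- if .Values.importCertificate }}
volumes:
  - name: msfot1
    configMap:
      name: msfot1
{{- end }}
(and the same around the volumeMount), so that dev and test never mount the .der file and the entrypoint can skip the keytool step when the file is not there.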
Please advise.
Thanks
I'm trying to use docker-compose and kubernetes as two different solutions to set up a Django API served by Gunicorn (as the web server) and Nginx (as the reverse proxy). Here are the key files:
default.tmpl (nginx) - this is converted to default.conf when the environment variable is filled in:
upstream api {
    server ${UPSTREAM_SERVER};
}
server {
    listen 80;
    location / {
        proxy_pass http://api;
    }
    location /staticfiles {
        alias /app/static/;
    }
}
docker-compose.yaml:
version: '3'
services:
  api-gunicorn:
    build: ./api
    command: gunicorn --bind=0.0.0.0:8000 api.wsgi:application
    volumes:
      - ./api:/app
  api-proxy:
    build: ./api-proxy
    command: /bin/bash -c "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
    environment:
      - UPSTREAM_SERVER=api-gunicorn:8000
    ports:
      - 80:80
    volumes:
      - ./api/static:/app/static
    depends_on:
      - api-gunicorn
api-deployment.yaml (kubernetes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-myapp-api-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp-api-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp-api-proxy
    spec:
      containers:
        - name: myapp-api-gunicorn
          image: "helm-django_api-gunicorn:latest"
          imagePullPolicy: Never
          command:
            - "/bin/bash"
          args:
            - "-c"
            - "gunicorn --bind=0.0.0.0:8000 api.wsgi:application"
        - name: myapp-api-proxy
          image: "helm-django_api-proxy:latest"
          imagePullPolicy: Never
          command:
            - "/bin/bash"
          args:
            - "-c"
            - "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
          env:
            - name: UPSTREAM_SERVER
              value: 127.0.0.1:8000
          volumeMounts:
            - mountPath: /app/static
              name: api-static-assets-on-host-mount
      volumes:
        - name: api-static-assets-on-host-mount
          hostPath:
            path: /Users/jonathan.metz/repos/personal/code-demos/kubernetes-demo/helm-django/api/static
My question involves the UPSTREAM_SERVER environment variable.
For docker-compose.yaml, the following values have worked for me:
Setting it to the name of the gunicorn service and the port it's running on (in this case api-gunicorn:8000). This is the best way to do it (and how I've done it in the docker-compose file above) because I don't need to expose port 8000 to the host machine.
Setting it to MY_IP_ADDRESS:8000 as described in this SO post. This method requires me to expose port 8000, which is not ideal.
For api-deployment.yaml, only the following value has worked for me:
Setting it to localhost:8000. Inside of a pod, all containers can communicate using localhost.
Are there any other values for UPSTREAM_SERVER that work here, especially in the kubernetes file? I feel like I should be able to point to the container's name and that should work.
You could create a Service to target the myapp-api-gunicorn container, but this will also expose it outside of the pod:
apiVersion: v1
kind: Service
metadata:
  name: api-gunicorn-service
spec:
  selector:
    app.kubernetes.io/name: myapp-api-proxy
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
You might also use hostname and subdomain fields inside a pod to take advantage of FQDN.
Currently when a pod is created, its hostname is the Pod’s metadata.name value.
The Pod spec has an optional hostname field, which can be used to specify the Pod’s hostname. When specified, it takes precedence over the Pod’s name to be the hostname of the pod. For example, given a Pod with hostname set to “my-host”, the Pod will have its hostname set to “my-host”.
The Pod spec also has an optional subdomain field which can be used to specify its subdomain. For example, a Pod with hostname set to “foo”, and subdomain set to “bar”, in namespace “my-namespace”, will have the fully qualified domain name (FQDN) “foo.bar.my-namespace.svc.cluster-domain.example”.
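As a sketch of that example (note that for the FQDN to actually resolve in cluster DNS you also need a headless Service whose name matches the subdomain; the names come from the quoted docs, the label and image are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: bar                   # must match the pod's subdomain
  namespace: my-namespace
spec:
  clusterIP: None             # headless
  selector:
    app: foo-app
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: my-namespace
  labels:
    app: foo-app
spec:
  hostname: foo
  subdomain: bar              # FQDN becomes foo.bar.my-namespace.svc.cluster-domain.example
  containers:
    - name: app
      image: nginx            # any image works for the example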
Also here is a nice article from Mirantis which talks about exposing multiple containers in a pod
I have a Django container and an Nginx container. They work fine with docker-compose, and now I'm trying to use the images with Kubernetes. Everything works fine, except that the nginx container cannot connect to the uwsgi upstream. No response is being returned.
Here are my configurations:
# Nginx configuration
upstream django {
    server admin-api-app:8001 max_fails=20 fail_timeout=10s; # for a web port socket (we'll use this first),
}
server {
    # the port your site will be served on
    listen 80;
    server_name server localhost my-website-domain.de;
    charset utf-8;
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
}
# Uwsgi file
module = site_module.wsgi
master = true
processes = 5
socket = :8001
enable-threads = true
vacuum=True
# Kubernetes
# Backend Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: container-backend
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: container-backend
          image: my-djangoimage:latest
          command: ["./docker/entrypoint.sh"]
          ports:
            - containerPort: 8001
              name: uwsgi
        - name: nginx
          image: my-nginx-image:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
---
# Backend Service
kind: Service
apiVersion: v1
metadata:
  name: admin-api-app
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
You probably need to change the host in your django upstream because, as far as I understand, you want to connect to your django app located in the same pod as nginx, so try to change:
server admin-api-app:8001 max_fails=20 fail_timeout=10s;
to
server localhost:8001 max_fails=20 fail_timeout=10s;
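Alternatively, if you prefer to keep the admin-api-app name in the upstream, note that your current Service only exposes port 80, so it would also have to expose the uwsgi port, roughly like this (a sketch):
kind: Service
apiVersion: v1
metadata:
  name: admin-api-app
spec:
  selector:
    app: backend
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: uwsgi
      port: 8001
      targetPort: 8001
  type: LoadBalancer
Going through the Service for traffic that never leaves the pod is unnecessary, though (and with type LoadBalancer it would also expose 8001 externally), which is why localhost:8001 is the simpler fix.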
Edit:
To make it work you need to change socket to http-socket but it can be painful/pointless as described here: Should I have separate containers for Flask, uWSGI, and nginx?
What is the easiest way to launch celery beat and worker processes in my django pod?
I'm migrating my OpenShift v2 Django app to OpenShift v3. I'm using the Pro subscription. I'm really a noob at OpenShift v3, docker, containers and kubernetes. I have used this tutorial https://blog.openshift.com/migrating-django-applications-openshift-3/ to migrate my app (which works pretty well).
I'm now struggling with how to start celery. On OpenShift 2 I just used a post_start action hook:
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery worker\
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/%n.pid"\
--logfile="$OPENSHIFT_DATA_DIR/celery/log/%n.log"\
python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery beat\
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/celeryd.pid"\
--logfile="$OPENSHIFT_DATA_DIR/celery/log/celeryd.log" &
-c 1\
--autoreload &
It is quite a simple setup. It just uses the django database as a message broker. No RabbitMQ or anything.
Would an OpenShift "job" be appropriate for that? Or is it better to use the powershift image (https://pypi.python.org/pypi/powershift-image) action commands? I did not understand how to execute them.
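Or would simply running the worker in a second container next to the web container be a direction? Something like this is what I imagine (only a sketch; I am assuming the image's working directory is /opt/app-root/src so that wsgi/podpub/manage.py resolves, and I have not tried it):
- name: celery
  image: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
  # same image as the django container, but started with the celery worker instead of the web server
  command: ["python", "wsgi/podpub/manage.py", "celery", "worker", "-c", "1"]
  volumeMounts:
    - mountPath: /opt/app-root/src/data
      name: data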
Here is the current deployment configuration for my only app, "django":
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2017-12-27T22:58:31Z
  generation: 67
  labels:
    app: django
  name: django
  namespace: myproject
  resourceVersion: "68466321"
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/django
  uid: 64600436-ab49-11e7-ab43-0601fd434256
spec:
  replicas: 1
  selector:
    app: django
    deploymentconfig: django
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Recreate
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: django
        deploymentconfig: django
    spec:
      containers:
        - image: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
          imagePullPolicy: Always
          name: django
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/app-root/src/data
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: django-data
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - django
        from:
          kind: ImageStreamTag
          name: django:latest
          namespace: myproject
        lastTriggeredImage: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
      type: ImageChange
I'm using mod_wsgi-express, and this is my app.sh:
ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"
exec mod_wsgi-express start-server $ARGS wsgi/application
Help is very much appreciated. Thank you.
I have managed to get it working, though I'm not quite happy with it. I will move to a PostgreSQL database very soon. Here is what I did:
mod_wsgi-express has an option called service-script which starts an additional process besides the actual app. So I updated my app.sh:
#!/bin/bash
ARGS=""
ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"
ARGS="$ARGS --service-script celery_starter scripts/startCelery.py"
exec mod_wsgi-express start-server $ARGS wsgi/application
mind the last ARGS=... line.
I created a python script that starts up my celery worker and beat.
startCelery.py:
import subprocess

OPENSHIFT_REPO_DIR = "/opt/app-root/src"
OPENSHIFT_DATA_DIR = "/opt/app-root/src/data"

pathToManagePy = OPENSHIFT_REPO_DIR + "/wsgi/podpub"

worker_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "worker",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/%n.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/%n.log",
    "-c 1",
    "--autoreload"
]
print(worker_cmd)
subprocess.Popen(worker_cmd, close_fds=True)

beat_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "beat",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/celeryd.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/celeryd.log",
]
print(beat_cmd)
subprocess.Popen(beat_cmd)
This was actually working, but I kept receiving a message when I tried to launch the celery worker saying:
"Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT environment variable (but please think about this before you do)."
Even though I added these configurations to my settings.py in order to remove the pickle serializer, it kept giving me that same error message.
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACEEPT_CONTENT = ['json']
I don't know why.
In the end I added C_FORCE_ROOT to my .s2i/environment file:
C_FORCE_ROOT=true
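I suppose I could equally have set it as an environment variable on the django container in the deployment configuration instead, something like:
env:
  - name: C_FORCE_ROOT
    value: "true"
but the .s2i/environment file was the quickest way for me.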
Now it's working, at least I think so. My next job will only run in some hours. I'm still open for any further suggestions and tips.