Using variable interpolation in Gunicorn inside a Dockerfile - Flask

Looking for a way to pass the PORT environment variable to the gunicorn command inside a Dockerfile.
Current setup:
.env:
PORT=8080
Dockerfile:
EXPOSE ${PORT}
CMD ["gunicorn" , "--timeout" , "120" ,"-b", "0.0.0.0:8080", "wsgi:app"]
First attempt
CMD ["gunicorn" , "--timeout" , "120" ,"-b", "0.0.0.0:${PORT}", "wsgi:app"]
Failed
Second attempt
CMD ["gunicorn" , "--timeout" , "120" ,"-b", "0.0.0.0:{PORT}", "wsgi:app"]
Failed
What is the correct way to pass PORT to gunicorn?
Update:
I can run the command in bash successfully
#!/bin/bash
PORT=8879
SERVER_PORT=0.0.0.0:${PORT}
echo ${SERVER_PORT}
gunicorn --bind ${SERVER_PORT} wsgi:app
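The bash script works because the shell expands ${PORT}; the exec-form CMD (the JSON-array syntax) is never run through a shell, so ${PORT} is left as a literal string. A minimal sketch of the usual workaround, assuming PORT is made available to the image via ENV or at runtime with docker run -e PORT=... / --env-file .env (the .env file by itself is only read by tools like docker-compose, not by the Dockerfile):
ARG PORT=8080
ENV PORT=${PORT}
EXPOSE ${PORT}
# Shell form: the command is wrapped in /bin/sh -c, so ${PORT} is expanded when the container starts.
CMD gunicorn --timeout 120 -b 0.0.0.0:${PORT} wsgi:app
An equivalent option is to keep the exec form and delegate the expansion to a small entrypoint script that ends with exec gunicorn --timeout 120 -b "0.0.0.0:${PORT}" wsgi:app.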

Related

uWSGI error every time I use the requests module in Django app on Docker

I'm running a Django app with uWSGI in Docker with docker-compose. I get the same error every time I:
Send a POST request with AJAX
In handling said request in my view, I use python's requests module, i.e. r = requests.get(some_url)
uWSGI says the following:
!!! uWSGI process 13 got Segmentation Fault !!!
DAMN ! worker 1 (pid: 13) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 24)
spawned 4 offload threads for uWSGI worker 1
The console in the browser says net::ERR_EMPTY_RESPONSE
I've tried using the requests module in different places, and wherever I put it I get the same Segmentation Fault error. I'm also able to run everything fine outside of Docker with no errors, so I've narrowed it down to: docker + requests module = error.
Is there something that could be blocking the requests sent with the requests module from within the docker container? Thanks in advance for your help.
Here's my uwsgi.ini file:
[uwsgi]
chdir = %d
module = my_project.wsgi:application
master = true
processes = 2
http = 0.0.0.0:8000
vacuum = true
pidfile = /tmp/my_project.pid
daemonize = %d/my_project.log
check-static = %d
static-expires = /* 7776000
offload-threads = %k
uid = 1000
gid = 1000
# there is no /etc/mime.types on the docker Arch Linux image
mime-file = %d/mime.types
Dockerfile:
FROM alpine:3.8
ENV PYTHONUNBUFFERED 1
RUN mkdir /my_project
WORKDIR /my_project
RUN apk add build-base python3-dev py3-pip python3
# deps for python cryptography
RUN apk add libffi-dev musl-dev openssl-dev
# dep for uwsgi
RUN apk add linux-headers
ADD requirements.txt /my_project/
RUN pip3 install -r requirements.txt
ADD . /my_project/
ENTRYPOINT ./start.sh
docker-compose.yml:
version: '3'
services:
  web:
    build: .
    entrypoint: ./start.sh
    volumes:
      - .:/my_project
    ports:
      - "8000:8000"
    environment:
      - DEBUG_LEVEL=INFO
    network_mode: "host"
start.sh:
#!/bin/sh
echo '' > logfile.log
uwsgi --ini uwsgi.ini
tail -f logfile.log
Solution: changing the base image to Ubuntu 16.04 made everything work fine.
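For reference, a rough sketch of what the switched Dockerfile might look like (the apt package names are my assumption, mirroring the Alpine dependencies above; adjust them to your requirements.txt):
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
WORKDIR /my_project
# assumed equivalents of the build-base / libffi-dev / openssl-dev packages used on Alpine
RUN apt-get update && apt-get install -y build-essential python3 python3-dev python3-pip libffi-dev libssl-dev
ADD requirements.txt /my_project/
RUN pip3 install -r requirements.txt
ADD . /my_project/
ENTRYPOINT ./start.sh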

How to config supervisor with Django channels and server daphne

I have a problem with my supervisor configuration; my file is in /etc/supervisor/conf.d/realtimecolonybit.conf.
When I try the command supervisorctl reread, it shows me "No config updates to processes", and when I try the other command like this
supervisorctl status realtimecolonybit
Shows me this error
realtimecolonybit FATAL can't find command '/home/ubuntu/realtimecolonybit/bin/start.sh;'
And when I try supervisorctl start realtimecolonybit
it shows me this error
realtimecolonybit: ERROR (no such file)
My configuration in my file realtimecolonybit.conf is below
[program:realtimecolonybit]
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log;
redirect_strderr = true;
My start.sh file is below
#!/bin/bash
NAME="realtimecolonybit"
DJANGODIR=/home/ubuntu/realtimecolonybit/colonybit
SOCKFILE=/home/ubuntu/realtimecolonybit/run/gunicorn.sock
USER=root
GROUP=root
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=colonybit.settings
echo "Starting $NAME as `whoami`"
cd $DJANGODIR
source /home/ubuntu/realtimecolonybit/bin/activate
# workon realtimecolonybit
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec daphne -b 0.0.0.0 -p 8001 colonybit.asgi:application
When I run it without supervisor like this
(realtimecolonybit)realtimecolonybit/#/ ./bin/start.sh
it runs OK and works well, but sometimes the server goes down.
I am trying to run Django 1.11 and django-channels with supervisor; my app is on AWS.
I solved my problem. The error was in the .conf file: I removed the trailing ; and dropped the .sh, changing start.sh to start.
Wrong command:
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
Correct command:
command = /home/ubuntu/realtimecolonybit/bin/start
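For completeness, a cleaned-up program section might look like the sketch below. The trailing semicolons on the other lines are best dropped too, since they can end up as part of the value (which is exactly why supervisor reported the literal path '.../start.sh;'), and redirect_strderr is assumed to be a typo for redirect_stderr:
[program:realtimecolonybit]
command = /home/ubuntu/realtimecolonybit/bin/start
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log
redirect_stderr = true
After editing the file, the usual sequence is supervisorctl reread, supervisorctl update, and then supervisorctl restart realtimecolonybit.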

Script execution using ansible [duplicate]

I am using Ansible to deploy my project and I am trying to check whether a specified package is installed, but I have a problem with this task. Here is the task:
- name: Check if python-apt is installed
  command: dpkg -l | grep python-apt
  register: python_apt_installed
  ignore_errors: True
And here is the problem:
$ ansible-playbook -i hosts idempotent.yml
PLAY [lxc-host] ***************************************************************
GATHERING FACTS ***************************************************************
ok: [10.0.3.240]
TASK: [idempotent | Check if python-apt is installed] *************************
failed: [10.0.3.240] => {"changed": true, "cmd": ["dpkg", "-l", "|", "grep", "python-apt"], "delta": "0:00:00.015524", "end": "2014-07-10 14:41:35.207971", "rc": 2, "start": "2014-07-10 14:41:35.192447"}
stderr: dpkg-query: error: package name in specifier '|' is illegal: must start with an alphanumeric character
...ignoring
PLAY RECAP ********************************************************************
10.0.3.240 : ok=2 changed=1 unreachable=0 failed=0
Why is the character '|' illegal here?
From the doc:
command - Executes a command on a remote node
The command module takes the command name followed by a list of
space-delimited arguments. The given command will be executed on all
selected nodes. It will not be processed through the shell, so
variables like $HOME and operations like "<", ">", "|", and "&" will
not work (use the shell module if you need these features).
shell - Executes commands in nodes
The shell module takes the command name followed by a list of space-delimited arguments.
It is almost exactly like the command module but runs the command
through a shell (/bin/sh) on the remote node.
Therefore you have to use shell: dpkg -l | grep python-apt.
Read about the command module in the Ansible documentation:
It will not be processed through the shell, so .. operations like "<", ">", "|", and "&" will not work
As it recommends, use the shell module:
- name: Check if python-apt is installed
  shell: dpkg -l | grep python-apt
  register: python_apt_installed
  ignore_errors: True
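As a sketch of how the registered result can then be used (the changed_when/failed_when tweaks and the follow-up task are my own additions, not part of the original answer):
- name: Check if python-apt is installed
  shell: dpkg -l | grep python-apt
  register: python_apt_installed
  changed_when: false
  failed_when: false

- name: Report whether python-apt is present
  debug:
    msg: "python-apt is installed"
  when: python_apt_installed.rc == 0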
For what it's worth, you can check/confirm the installation in a Debian environment using the apt module:
- name: ensure python-apt is installed
  apt: name=python-apt state=present

Superset in production

I've been trying to work out how best to productionise superset, or at least get it running as a daemon. I created a systemd service with the following:
[Unit]
Description=Superset
[Service]
Type=simple
WorkingDirectory=/home/XXXX/Documents/superset/venv
ExecStart=/home/XXXX/Documents/superset/venv/bin/superset runserver
[Install]
WantedBy=multi-user.target
The last error I got was that gunicorn cannot be found. I don't know what else I am missing, or is there another way to set it up?
I was able to set it up, after a bunch of searching and trial and error, with supervisor, which is a Python 2 program, but it can run any command (including other Python versions in other virtual environments).
I'm running it on an ubuntu 16 VPS. After creating an environment and installing supervisor, you create a configuration file and mine looks like this:
[supervisord]
logfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = %(ENV_HOME)s/sdaprod/supervisor/supervisord.pid
nodaemon = false
minfds = 1024
minprocs = 200
umask = 022
identifier = supervisor
directory = %(ENV_HOME)s/sdaprod/supervisor
nocleanup = true
childlogdir = %(ENV_HOME)s/sdaprod/supervisor
strip_ansi = false
[unix_http_server]
file=/tmp/supervisor.sock
chmod = 0777
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:superset]
command = %(ENV_HOME)s/miniconda3/envs/superset/bin/superset runserver
directory = %(ENV_HOME)s/sdaprod
environment = PATH='%(ENV_PATH)s:%(ENV_HOME)s/miniconda3/envs/superset/bin',PYTHONPATH='%(ENV_PYTHONPATH)s:%(ENV_HOME)s/sdacore:%(ENV_HOME)s/sdaprod'
And then you just run supervisord from an environment that has it installed
The %(ENV_<>)s entries are environment variables. This is my first time doing this, so I absolutely cannot vouch for this approach's efficiency, but it does work.
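For example, assuming the configuration above is saved as supervisor.conf in the directory you launch from, starting it and managing the superset program looks roughly like this:
supervisord -c supervisor.conf
supervisorctl -c supervisor.conf status superset
supervisorctl -c supervisor.conf restart superset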

How to run a celery worker with Django app scalable by AWS Elastic Beanstalk?

How can I use Django with AWS Elastic Beanstalk so that it also runs tasks via celery, but only on the main node?
This is how I set up celery with django on elastic beanstalk with scalability working fine.
Please keep in mind that the 'leader_only' option for container_commands works only on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk. To deal with that, you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
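If you want to script that protection, a hedged sketch with the AWS CLI follows (the instance id and Auto Scaling group name are placeholders you would have to look up for your environment):
aws autoscaling set-instance-protection \
    --instance-ids i-0123456789abcdef0 \
    --auto-scaling-group-name my-eb-asg \
    --protected-from-scale-in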
Add bash script for celery worker and beat configuration.
Add file root_folder/.ebextensions/files/celery_configuration.txt:
#!/usr/bin/env bash
# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
Take care of script execution during deployment, but only on the main node (leader_only: true).
Add file root_folder/.ebextensions/02-python.config:
container_commands:
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
Beat is configurable without the need for redeployment, with a separate Django application: https://pypi.python.org/pypi/django_celery_beat.
Storing task results is a good idea too: https://pypi.python.org/pypi/django_celery_results
File requirements.txt
celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"
Configure celery for Amazon SQS broker
(Get your desired endpoint from list: http://docs.aws.amazon.com/general/latest/gr/rande.html)
root_folder/django_app/settings.py:
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Due to an error in the lib, the N. Virginia region is used temporarily. Please set it to Ireland "eu-west-1" after the fix.
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "eu-west-1",
    'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
    'visibility_timeout': 360,
    'polling_interval': 1
}
...
Celery configuration for the django_app Django app
Add file root_folder/django_app/celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')
app = Celery('django_app')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
Modify file root_folder/django_app/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app
__all__ = ['celery_app']
Check also:
How do you run a worker with AWS Elastic Beanstalk? (solution without scalability)
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized" (solution for problems coming from an obsolete pip on Elastic Beanstalk that cannot deal with global options needed to properly solve the pycurl dependency)
This is how I extended the answer by @smentek to allow for multiple worker instances and a single beat instance - the same thing applies where you have to protect your leader. (I still don't have an automated solution for that.)
Please note that envvar updates to EB via the EB CLI or the web interface are not reflected by celery beat or workers until an app server restart has taken place. This caught me off guard once.
A single celery_configuration.sh file outputs two scripts for supervisord. Note that celery-beat has autostart=false, otherwise you end up with many beats after an instance restart:
# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
Then in container_commands we only restart beat on leader:
container_commands:
  # create the celery configuration file
  01_create_celery_beat_configuration_file:
    command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
  # restart celery beat if leader
  02_start_celery_beat:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
    leader_only: true
  # restart celery worker
  03_start_celery_worker:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"
If someone is following smentek's answer and getting the error:
05_celery_tasks_run: /usr/bin/env bash does not exist.
know that, if you are using Windows, your problem might be that the celery_configuration.txt file has Windows EOL when it should have Unix EOL. If using Notepad++, open the file and click "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and the error is gone.
Also, a couple of warnings for really-amateur people like me:
Be sure to include "django_celery_beat" and "django_celery_results" in your INSTALLED_APPS in the settings.py file (a minimal sketch follows after these warnings).
To check celery errors, connect to your instance with "eb ssh" and then "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" refers to the number of lines you want to read from the file, starting from the end).
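A minimal sketch of that INSTALLED_APPS addition in settings.py (the surrounding entries are placeholders for your own apps):
INSTALLED_APPS = [
    # ... your existing apps ...
    'django_celery_beat',
    'django_celery_results',
]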
Hope this helps someone, it would've saved me some hours!