I have a simple Flask application (it simply shows "Hello world") that I would like to deploy on AWS Elastic Beanstalk. Multiple tutorials show deployment with nginx and gunicorn.
1) I don't understand why we need nginx; gunicorn is already a web server that replaces Flask's built-in development server.
2) The tutorials show how to build two Docker containers: one for Flask and gunicorn, and another for nginx. Why do I need two containers? Can I package it all in one? With two containers I cannot use Single Container Docker; I have to use Multicontainer Docker.
Any thoughts?
Usually in this trio nginx is used as a reverse proxy: it sits in front of gunicorn, terminates incoming HTTP(S) connections, serves static files, and buffers slow clients so gunicorn workers are not tied up.
It is possible to package Flask + gunicorn + nginx in the same Docker container. For example:
FROM python:3.6.4
# Software version management
ENV NGINX_VERSION=1.13.8-1~jessie
ENV GUNICORN_VERSION=19.7.1
ENV GEVENT_VERSION=1.2.2
# Environment setting
ENV APP_ENVIRONMENT production
# Flask demo application
COPY ./app /app
RUN pip install -r /app/requirements.txt
# System packages installation
RUN echo "deb http://nginx.org/packages/mainline/debian/ jessie nginx" >> /etc/apt/sources.list
RUN wget https://nginx.org/keys/nginx_signing.key -O - | apt-key add -
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION \
    && rm -rf /var/lib/apt/lists/*
# Nginx configuration
# ("daemon off;" is passed via -g in the CMD below, so it is not also
# appended to nginx.conf, which would make the directive a duplicate)
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
# Gunicorn installation
RUN pip install gunicorn==$GUNICORN_VERSION gevent==$GEVENT_VERSION
# Gunicorn default configuration
COPY gunicorn.config.py /app/gunicorn.config.py
WORKDIR /app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I am trying to deploy two Django apps on a single AWS EC2 instance with the same IP.
It always fails when I add the second app's .sock file and test Supervisor.
I found that somebody asked a similar question before, but it was not answered properly, and my use case is a little different (Run multiple django project with nginx and gunicorn).
I have followed these steps:
Cloned my project from Git
pip install -r requirements.txt
pip3 install gunicorn
sudo apt-get install nginx -y
sudo apt-get install supervisor -y
cd /etc/supervisor/conf.d
sudo touch testapp2.conf
sudo nano testapp2.conf
Updated the config file as below:
[program:gunicorn]
directory=/home/ubuntu/projects/testapp2/testerapp
command=/home/ubuntu/projects/testapp2/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp2/testerapp/app.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.out.log
[group:guni]
programs=gunicorn
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status
These steps work and the site is available in the browser when there is only one configuration. But when I add an additional configuration, the browser shows 502 Bad Gateway. Please help me solve this issue.
You can add one more config file in Supervisor's conf.d and use different ports (or different socket paths) for the different Django apps. Each [program:x] section also needs a unique name: if both files declare [program:gunicorn], or both gunicorns bind the same app.sock, the second app never comes up and nginx returns 502 Bad Gateway. A sketch of a second config file is shown below.
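For example, a second file testapp3.conf (all names and paths here are illustrative, mirroring the first config):
[program:gunicorn_testapp3]
directory=/home/ubuntu/projects/testapp3/testerapp
command=/home/ubuntu/projects/testapp3/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp3/testerapp/app.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.out.log
Point the second nginx server block at unix:/home/ubuntu/projects/testapp3/testerapp/app.sock so each site talks to its own gunicorn.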
So I'm trying to run the Django development server in a container, but I can't access it through my browser. I have 2 containers on the same Docker network, one with Postgres and the other with Django. I can ping both containers, successfully connect the two together, and run ./manage.py runserver fine, but I can't curl it or open it in a browser.
Here is my Django Dockerfile
FROM alpine:latest
COPY ./requirements.txt .
ADD ./parking/ /parking
RUN apk add --no-cache --virtual .build-deps python3-dev gcc py3-pip postgresql-dev py3-virtualenv musl-dev libc-dev linux-headers
RUN virtualenv /.env
RUN /.env/bin/pip install -r /requirements.txt
WORKDIR /parking
EXPOSE 8000 5432
The postgres container I pulled from Docker Hub.
I ran Django with:
docker run --name=django --network=app -p 127.4.3.1:6969:8000 -it dev/django:1.0
I ran postgres with
docker run --name=some-postgres --network=app -p 127.2.2.2:6969:5432 -e POSTGRES_PASSWORD=123 -e POSTGRES_DB=parking postgres
Any help would be great. Thank you
I think you forgot to add the command that runs your application at the end of the Dockerfile. When you run this image, it just sets up the virtualenv and installs all the Python dependencies from requirements.txt, but the Django application is never started. You need to put something like this at the end:
CMD python parking/manage.py runserver
This will keep your container running on the chosen port and make your application accessible at 127.4.3.1:6969.
Okay, so I managed to figure it out. I looked at Leonardo Alves dos Santos's answer and came to the conclusion that I should run CMD python parking/manage.py runserver 0.0.0.0:8000 (binding to 0.0.0.0 so the server accepts connections from outside the container). Now I can access my Django app at the published port 127.4.3.1:6969, and at 172.18.0.2:8000 from inside the Docker network.
I've created a Docker image for a Django REST project with the following Dockerfile and docker-compose file.
Dockerfile
FROM python:3
# Set environment variables
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /
# Install dependencies.
RUN pip install -r /requirements.txt
# Set work directory.
RUN mkdir /app
WORKDIR /app
# Copy project code.
COPY . /app/
EXPOSE 8000
docker-compose file
version: "3"
services:
dj:
container_name: dj
build: django
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./django:/app
ports:
- "8000:8000"
The docker-compose up command brings up the server, but I can't access it in a web browser; the browser says ERR_ADDRESS_INVALID.
Docker version 18.09.2
0.0.0.0 is IPv4 for "everywhere"; you can't usually make outbound connections to it. If you have a Docker Desktop application, try http://localhost:8000; if it's Docker Toolbox, you'll need the docker-machine ip address, usually http://192.168.99.100:8000.
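For example (assuming the "8000:8000" port mapping from the compose file above):
# Docker Desktop: the published port is reachable on localhost
curl http://localhost:8000/
# Docker Toolbox: use the VM's address instead
curl http://$(docker-machine ip):8000/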
Thanks to David Maze, the problem is solved.
I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for Django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory is that it's trying to find. Have I set this all up right for my project structure?
For the Django part you're missing a copy of the code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to build from the Python Alpine image instead of Ubuntu, as it will significantly reduce build times and image size.
This is what I would do:
# Change this to whatever Python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# Whatever other dependencies you'll need; I run with the psycopg2-binary build, so I need these
# (the nice part of the python-alpine image is that you don't need to install any of those
# Python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# Expose whatever port you need for your Django app (the default is 8000; we use a
# non-default port, but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
RUN apk add --no-cache bash postgresql-libs
# No /code prefix needed here, since WORKDIR has effectively changed the directory
RUN chmod +x run-django.sh
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
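A minimal sketch of what such a script could look like, assuming the standard manage.py layout and the development server (swap in uwsgi or gunicorn for production):
#!/bin/bash
# Apply database migrations, then start the app server in the foreground
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000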
Long story short, you weren't copying the code from back into /code.
Also, in your docker-compose you don't have a build context for the django service like you do for the vue service.
As for your rabbitmq container failure, you need to stop the host service associated with RabbitMQ on your computer. I get this error when I try to expose a PostgreSQL or Redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, so there are no collisions on that service's default port.
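For RabbitMQ the equivalent would be something like the following (the service name is an assumption; rabbitmq-server is the usual name on Debian/Ubuntu):
# Free up ports 5672/15672 on the host so the container can bind them
sudo /etc/init.d/rabbitmq-server stop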
I am trying to serve a Django application with uWSGI from Docker. I am using supervisord to start the process for me at the end of the Dockerfile. When I run the image, it says that the uWSGI process starts and succeeds, but I'm unable to view the application at the URL I thought would display it. Perhaps I do not have things set up/configured correctly?
I am not having supervisord start nginx right now because I am currently serving static files via Amazon S3, and want to first focus on getting the wsgi up and running.
I can successfully run the application with uWSGI locally by doing uwsgi --ini uwsgi.ini:local, but I am having trouble moving it into Docker.
Here is my Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
ADD . /home/docker/code
# Setup config files
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/app/requirements.txt
EXPOSE 8080
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf", "-n"]
And here is my uwsgi.ini
[uwsgi]
# this config will be loaded if nothing specific is specified
# load base config from below
ini = :base
# %d is the dir this configuration file is in
socket = %dmy_app.sock
master = true
processes = 4
[dev]
ini = :base
# socket (uwsgi) is not the same as http, nor http-socket
socket = :8001
[local]
ini = :base
http = :8000
# set the virtual env to use
home=/Users/my_user/.virtualenvs/my_env
[base]
# chdir to the folder of this config file, plus app/website
chdir = %dmy_app/
# load the module from wsgi.py, it is a python path from
# the directory above.
module=my_app.wsgi:application
# allow anyone to connect to the socket. This is very permissive
chmod-socket=666
http = :8080
And here is my supervisor-app.conf file
[program:app-uwsgi]
command = /usr/local/bin/uwsgi --ini /home/docker/code/uwsgi.ini
From a Mac using boot2docker, I am trying to access the application at $(boot2docker ip):8080.
Ultimately I want to upload this container to AWS Elastic Beanstalk, with not only a uWSGI process running, but a celery worker running as well.
When I run my container, I can see from the logs that both supervisor and uwsgi successfully start. I was able to get things running on my local machine both using uwsgi by itself and uwsgi through supervisor, but for some reason when I containerize the thing I can't find it anywhere.
Here is what is logged when I boot up the docker container
2014-12-25 15:08:03,950 CRIT Supervisor running as root (no user in config file)
2014-12-25 15:08:03,953 INFO supervisord started with pid 1
2014-12-25 15:08:04,957 INFO spawned: 'uwsgi' with pid 9
2014-12-25 15:08:05,970 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
How are you starting the docker container?
I don't see any CMD or ENTRYPOINT script, so I'm unclear as to how anything is getting started.
In general, I would advise avoiding things like supervisord unless absolutely necessary; just start uWSGI in the foreground from the CMD line. Try adding the following as the last line in the Dockerfile:
CMD ["/usr/local/bin/uwsgi", "--ini", "/home/docker/code/uwsgi.ini"]
and then just run with docker run -p 8080:8080 image_name (the [base] section of your uwsgi.ini serves HTTP on :8080). You should get some reply from uWSGI. If that works, I recommend you move the other services (Postgres, Node) to separate containers. There are official images for Node, Python and Postgres which should save you some time.
Remember, Docker containers only run as long as their main process (which must be in the foreground).