I have a Plotly Dash application in a Docker container deployed with Elastic Beanstalk. Everything looks and runs fine, except when I run a process that takes a long time to complete. The longer processes will run, but then when a graph should be populated it does not return any graph at all. I can see in the logs that the operation is running, but the graph is not populated unless the process is shorter (< 45s approx).
I am using Amazon Linux 2 Docker + classic load balancer + nginx.
Dockerfile:
FROM python:3.9
ENV DASH_DEBUG_MODE False
COPY . /app
WORKDIR /app
RUN set -ex && \
pip install -r requirements.txt
EXPOSE 8050
CMD gunicorn -w 4 --timeout 500 -b 0.0.0.0:8050 application:server
I've tried with CMD ["python", "application.py"] as well.
I've tried using .ebextensions and .platform to modify options.config and nginx.conf, but neither has worked.
Elastic Beanstalk also uses gunicorn, which overrides the gunicorn command in the Dockerfile.
You have to add a Procfile in the root of your app directory.
web: gunicorn --bind :8000 --workers 3 --threads 2 --timeout 500 project.wsgi:application
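If the graph stops populating at roughly the 45-60 second mark, the nginx proxy timeout and the classic load balancer idle timeout are the usual suspects; raising both to match the gunicorn timeout is worth trying. A sketch, assuming the Amazon Linux 2 platform conventions (the filename is illustrative):

```
# .platform/nginx/conf.d/timeout.conf -- raise nginx's proxy timeouts
proxy_read_timeout 500;
proxy_send_timeout 500;
```

For the classic load balancer, the idle timeout can be raised the same way via an .ebextensions config file, using the `ConnectionSettingIdleTimeout` option under the `aws:elb:policies` namespace.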
I am trying to deploy a fairly simple Python web app with FastAPI and Gunicorn on Google Cloud Run with a Docker container following this tutorial, and upon deploying I keep hitting the same error:
Invalid ENTRYPOINT. [name: "gcr.io/<project_id>/<image_name>#sha256:xxx" error: "Invalid command \"/bin/sh\": file not found" ].
It works fine to build the image and push it to the Container Registry.
On Cloud Run I have set my secrets for the database connection, and I am passing which settings.py file to use for the production environment as an argument to the Dockerfile, as I did locally to build/run the container.
Any idea what I'm missing or doing wrong in the process? It's my first attempt at deploying a web app on a cloud service, so I might not have all the concepts down just yet.
Dockerfile
FROM ubuntu:latest
ENV PYTHONUNBUFFERED 1
RUN apt update && apt upgrade -y
RUN apt install -y -q build-essential python3-pip python3-dev
RUN pip3 install -U pip setuptools wheel
RUN pip3 install gunicorn uvloop httptools
COPY requirements.txt /code/requirements.txt
RUN pip3 install -r /code/requirements.txt
COPY . code/
# Pass the settings_module as an argument when building the image
ARG SETTINGS_MODULE
ENV SETTINGS_MODULE $SETTINGS_MODULE
EXPOSE $PORT
CMD exec /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ["build", "--build-arg", "SETTINGS_MODULE=app.settings_production", "-t", "gcr.io/$PROJECT_ID/<image_name>", "."]
images:
  - gcr.io/$PROJECT_ID/<image_name>
gcloud builds submit --config=cloudbuild.yaml
Update
I replaced ubuntu:latest (==20.04) with debian:buster-slim and it worked.
Previously
Deploying to Cloud Run, I receive the error too...I suspect it's the PORT, investigating. Not the PORT. Curiously, the image runs locally. Trying a different OS!
I repro'd your Dockerfile and cloudbuild.yaml in a project and the build and run succeed for me:
docker run \
--interactive --tty \
--env=PORT=8888 \
gcr.io/${PROJECT}/67486954
[2021-05-11 16:09:44 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2021-05-11 16:09:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8888 (1)
NOTE To build from a Dockerfile, you need not create a cloudbuild.yaml and can just run gcloud builds submit --tag gcr.io/PROJECT_ID/...
A good way to diagnose the issue is to run the docker build locally:
docker build \
--build-arg=SETTINGS_MODULE=app.settings_production \
--tag=gcr.io/$PROJECT_ID/<image_name> \
.
And then attempt to run it:
docker run \
--interactive --tty --rm \
gcr.io/$PROJECT_ID/<image_name>
This takes Cloud Build out of the equation and will likely reproduce the same error.
The error suggests that the container isn't finding a shell (/bin/sh) in the ubuntu:latest image which is curious.
I think you can (and probably should) drop the `exec` after `CMD`.
NOTE I read through Google's tutorial and see that the instructions include CMD exec ..., I'm unclear why that would be necessary but presumably it's not a problem.
Can you run the gunicorn command locally without issue?
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app --chdir /code
The placement of --chdir /code is curious too. How about:
WORKDIR code
COPY . .
...
CMD /usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker app.main:app
Hmmm, perhaps move the --chdir before app.main:app too, so that it's applied to gunicorn rather than to your app:
/usr/local/bin/gunicorn -b :$PORT -w 4 -k uvicorn.workers.UvicornWorker --chdir /code app.main:app
I'm using ECS for the first time. I have dockerized my Django 2.2 application and using ECS and uwsgi to run the Django application in production.
While in the development environment, I had to run three commands to run Django server, celery and celery beat
python manage.py runserver
celery -A qcg worker -l info
celery -A qcg beat -l info
Where qcg is my application.
My Dockerfile has following uwsgi configuration
EXPOSE 8000
ENV UWSGI_WSGI_FILE=qcg/wsgi.py
ENV UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy
ENV UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_MAP="/static/=/static_cdn/static_root/" UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
USER ${APP_USER}:${APP_USER}
ENTRYPOINT ["/app/scripts/docker/entrypoint.sh"]
The entrypoint.sh file has
exec "$@"
I have created the ECS task definition and in the container's command input, I have
uwsgi --show-config
This starts the uwsgi server.
Now I'm running 1 EC2 instance in the cluster and running one service with two instances of the task definition.
I couldn't figure out how to run the celery task and celery beat in my application.
Do I need to create separate tasks for running celery and celery-beat?
Yes, you need to run separate ECS tasks (or separate ECS services) for celery beat and celery worker. Celery Beat will send the Celery tasks to the Celery worker.
I use separate Dockerfiles for Celery, Celery beat, and Django.
Worker Dockerfile something like this:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
ADD . /src
WORKDIR /src
CMD ["celery", "-A", "<appname>", "worker"]
and Beat Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ADD requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
ADD . /src
WORKDIR /src
CMD ["celery", "-A", "<appname>", "beat"]
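An alternative to maintaining separate Dockerfiles is to build one image and override the command per container in the ECS task definition, since the `command` field there replaces the Dockerfile's `CMD`. A hypothetical `containerDefinitions` fragment (the image URI is a placeholder):

```
[
  {
    "name": "celery-worker",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/qcg:latest",
    "command": ["celery", "-A", "qcg", "worker", "-l", "info"]
  },
  {
    "name": "celery-beat",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/qcg:latest",
    "command": ["celery", "-A", "qcg", "beat", "-l", "info"]
  }
]
```

Either way, beat must run as exactly one instance, while workers can scale out.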
I've created a docker image for django rest project, with following Dockerfile and docker-compose file,
Dockerfile
FROM python:3
# Set environment variables
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /
# Install dependencies.
RUN pip install -r /requirements.txt
# Set work directory.
RUN mkdir /app
WORKDIR /app
# Copy project code.
COPY . /app/
EXPOSE 8000
docker-compose file
version: "3"
services:
  dj:
    container_name: dj
    build: django
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./django:/app
    ports:
      - "8000:8000"
And docker-compose up command bring up the server like this,
but in the web browser I can't access the server; the browser says ERR_ADDRESS_INVALID.
Docker version 18.09.2
0.0.0.0 is IPv4 for "everywhere"; you can't usually make outbound connections to it. If you have a Docker Desktop application, try http://localhost:8000; if it's Docker Toolbox, you'll need the docker-machine ip address, usually http://192.168.99.100:8000.
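The distinction can be demonstrated with a few lines of stdlib Python, unrelated to Docker itself: a socket bound to 0.0.0.0 listens on every interface, but a client still dials a concrete address such as 127.0.0.1 (a sketch):

```python
import socket
import threading

def bind_and_connect():
    """Bind a listener to 0.0.0.0 (every interface), then connect to it
    via a concrete address (127.0.0.1) -- connecting *to* 0.0.0.0 is not
    generally valid, which is why the browser rejects it."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 0))      # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve_once():
        conn, _ = server.accept()
        conn.sendall(b"hello")
        conn.close()

    t = threading.Thread(target=serve_once)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))  # not 0.0.0.0
    data = client.recv(16)
    client.close()
    t.join()
    server.close()
    return data.decode()

print(bind_and_connect())  # prints "hello"
```

The same logic applies to the published port: the container binds 0.0.0.0:8000 so Docker can reach it from outside, and you browse to localhost (or the docker-machine IP) on the host.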
Thanks to David Maze, the problem is solved.
This question already has answers here:
Are a WSGI server and HTTP server required to serve a Flask app?
I have a simple Flask application (it simply shows "Hello world"), and I would like to deploy it on AWS Elastic Beanstalk. Multiple tutorials show deployment with nginx and gunicorn.
1) I don't understand why we need to use nginx; gunicorn is already a web server that replaces Flask's built-in web server.
2) Tutorials show how to build two Docker containers: one for Flask and gunicorn and another for nginx. Why do I need two containers; can I package it all in one? With two containers I cannot use Single Container Docker, I need to use Multicontainer Docker.
Any thoughts?
Usually in this trio, nginx is used as a reverse proxy.
It is possible to package flask+gunicorn+nginx in the same docker container:
For example:
FROM python:3.6.4
# Software version management
ENV NGINX_VERSION=1.13.8-1~jessie
ENV GUNICORN_VERSION=19.7.1
ENV GEVENT_VERSION=1.2.2
# Environment setting
ENV APP_ENVIRONMENT production
# Flask demo application
COPY ./app /app
RUN pip install -r /app/requirements.txt
# System packages installation
RUN echo "deb http://nginx.org/packages/mainline/debian/ jessie nginx" >> /etc/apt/sources.list
RUN wget https://nginx.org/keys/nginx_signing.key -O - | apt-key add -
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION \
    && rm -rf /var/lib/apt/lists/*
# Nginx configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
# Gunicorn installation
RUN pip install gunicorn==$GUNICORN_VERSION gevent==$GEVENT_VERSION
# Gunicorn default configuration
COPY gunicorn.config.py /app/gunicorn.config.py
WORKDIR /app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I have an application written in Django and I am trying to run it in docker on Digital Ocean droplet. Currently I have two files.
Can anybody advise how to get rid of docker-compose.yml file and integrate all the commands within Dockerfile ???
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r reqirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python jk/manage.py runserver 0.0.0.0:8081
    volumes:
      - .:/code
    ports:
      - "8081:8081"
I run my application and docker image like following:
docker-compose run web python jk/manage.py migrate
docker-compose up
output:
Starting workspace_web_1 ...
Starting workspace_web_1 ... done
Attaching to workspace_web_1
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | December 02, 2017 - 09:20:51
web_1 | Django version 1.11.3, using settings 'jk.settings'
web_1 | Starting development server at http://0.0.0.0:8081/
web_1 | Quit the server with CONTROL-C.
...
Ok so I have take the following approach:
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r reqirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
then I ran:
docker build -t "django:v1" .
So docker images -a throws:
docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
django v1 b3dec6aaf9b9 5 minutes ago 949MB
<none> <none> 55370397f7f2 5 minutes ago 948MB
<none> <none> e7eba7113203 7 minutes ago 693MB
<none> <none> dc3d7705c45a 7 minutes ago 691MB
<none> <none> 12825382746d 7 minutes ago 691MB
<none> <none> 2304087e8b82 7 minutes ago 691MB
python 3 79e1dc9af1c1 3 weeks ago 691MB
And finally I ran:
cd /opt/workspace
docker run -d -v /opt/workspace:/code -p 8081:8081 django:v1 python jk/manage.py runserver 0.0.0.0:8081
Two simple questions:
Do I get it right that each <none> image listed is created when running the docker build -t "django:v1" . command to build my image? So does that mean they consume around [(691 x 4) + (693 x 1) + 948 + 949] MB of disk space?
Is it better to use gunicorn or another WSGI program to run Django in production?
And the responses from @vmonteco:
I think so, but a way to reduce the size taken by your images is to reduce their number by using a single RUN directive for several chained commands in your Dockerfile (RUN cmd1 && cmd2 rather than RUN cmd1 followed by RUN cmd2).
It's up to you to form your own opinion. I personally use uWSGI, but there are other choices besides gunicorn/uWSGI (not just "wsgi": that's the name of an interface specification, not a program). Have fun finding your preferred one! :)
TL;DR
You can pass some information to your Dockerfile (the command to run), but that wouldn't be equivalent, and you can't do that with all of the docker-compose.yml file's content.
You can replace your docker-compose.yml file with command lines, though (docker-compose exists precisely to replace those).
In your case you can add the command to run to your Dockerfile as a default command (which isn't exactly the same as passing it to containers you start at runtime):
CMD ["python", "jk/manage.py", "runserver", "0.0.0.0:8081"]
or pass this command directly in command line like the volume and port which should give something like :
docker run -d -v "$(pwd)":/code -p 8081:8081 yourimage python jk/manage.py runserver 0.0.0.0:8081
BUT
Keep in mind that Dockerfiles and docker-compose serve two whole different purposes.
Dockerfile are meant for image building, to define the steps to build your images.
docker-compose is a tool to start and orchestrate containers to build your applications (you can add some information like the build context path or the names for the images you need, but not the Dockerfile content itself).
So asking to "convert a docker-compose.yml file into a Dockerfile" isn't really relevant.
That's more about converting a docker-compose.yml file into one (or several) command line(s) to start containers by hand.
The purpose of docker-compose is precisely to get rid of these command lines and make things simpler (it automates them).
Also:
From the manage.py documentation:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that's how it's gonna stay.)
Django's runserver included in the manage.py tool isn't meant for production.
You might want to consider using a WSGI server behind a proxy.
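A minimal sketch of such a setup, assuming the layout from the question (a jk/jk/wsgi.py as generated by django-admin startproject, and gunicorn as the WSGI server; adjust the module path to your actual project):

```
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY . /code/
RUN pip install -r reqirements.txt gunicorn
RUN python /code/jk/manage.py collectstatic --noinput
EXPOSE 8081
# jk.wsgi:application assumes the standard startproject layout (jk/jk/wsgi.py)
CMD ["gunicorn", "--chdir", "jk", "--bind", "0.0.0.0:8081", "jk.wsgi:application"]
```

A reverse proxy such as nginx would then sit in front of this container to serve static files and terminate client connections.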