Django application served by uWSGI with Supervisord from Docker

I am trying to serve a Django application with uWSGI from Docker. I am using supervisord to start the process for me at the end of the Dockerfile. When I run the image, it says that the uWSGI process starts and succeeds, but I'm unable to view the application at the URL I thought would display it. Perhaps I do not have things set up/configured correctly?
I am not having supervisord start nginx right now because I am currently serving static files via Amazon S3, and want to first focus on getting uWSGI up and running.
I can run the application successfully with uWSGI locally by doing uwsgi --ini uwsgi.ini:local, but I am having trouble moving it into Docker.
Here is my Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
ADD . /home/docker/code
# Setup config files
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/app/requirements.txt
EXPOSE 8080
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf", "-n"]
And here is my uwsgi.ini
[uwsgi]
# this config will be loaded if nothing specific is specified
# load base config from below
ini = :base
# %d is the dir this configuration file is in
socket = %dmy_app.sock
master = true
processes = 4
[dev]
ini = :base
# socket (uwsgi) is not the same as http, nor http-socket
socket = :8001
[local]
ini = :base
http = :8000
# set the virtual env to use
home=/Users/my_user/.virtualenvs/my_env
[base]
# chdir to the folder of this config file, plus app/website
chdir = %dmy_app/
# load the module from wsgi.py, it is a python path from
# the directory above.
module=my_app.wsgi:application
# allow anyone to connect to the socket. This is very permissive
chmod-socket=666
http = :8080
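For reference, I select one of the named sections at launch like so (this is the [local] run mentioned above):
uwsgi --ini uwsgi.ini:local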
And here is my supervisor-app.conf file
[program:app-uwsgi]
command = /usr/local/bin/uwsgi --ini /home/docker/code/uwsgi.ini
From a Mac using boot2docker, I am trying to access the application at $(boot2docker ip):8080
Ultimately I want to upload this container to AWS Elastic Beanstalk, with not only a uWSGI process running, but a celery worker running as well.
When I run my container, I can see from the logs that both supervisor and uwsgi successfully start. I was able to get things running on my local machine both using uwsgi by itself and uwsgi through supervisor, but for some reason when I containerize the thing I can't find it anywhere.
Here is what is logged when I boot up the docker container
2014-12-25 15:08:03,950 CRIT Supervisor running as root (no user in config file)
2014-12-25 15:08:03,953 INFO supervisord started with pid 1
2014-12-25 15:08:04,957 INFO spawned: 'uwsgi' with pid 9
2014-12-25 15:08:05,970 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

How are you starting the docker container?
I don't see any CMD or ENTRYPOINT script, so I'm unclear as to how anything is getting started.
In general, I would advise avoiding things like supervisord unless absolutely necessary, just start uWSGI in the foreground from a CMD line. Try adding the following as the last line in the Dockerfile:
CMD ["/usr/local/bin/uwsgi", "--ini", "/home/docker/code/uwsgi.ini"]
and then just run with docker run -p 8080:8080 image_name (matching the http = :8080 line in your [base] section). You should get some reply from uWSGI. If that works, I recommend you move the other services (postgres, node) to separate containers. There are official images for Node, Python and Postgres which should save you some time.
Remember, Docker containers only run as long as their main process (which must be in the foreground).
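For example, a minimal sketch of that split (the image and container names here are hypothetical; the published port matches the http = :8080 line in your [base] section, and postgres comes from its official image instead of apt-get):
docker run -d --name db postgres
docker build -t my_app .
docker run -d --link db:db -p 8080:8080 my_app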

Related

How to deploy multiple Django apps on a single server using Nginx & Gunicorn?

I am trying to deploy two Django apps on a single AWS EC2 instance with the same IP.
It always fails when I add the second app's .sock file and test Supervisor.
I found somebody who asked a similar question before, but it was not answered properly, and my use case is a little different. (Run multiple django project with nginx and gunicorn)
I have followed these steps:
Cloned my project from Git
pip install -r requirements.txt
pip3 install gunicorn
sudo apt-get install nginx -y
sudo apt-get install supervisor -y
cd /etc/supervisor/conf.d
sudo touch testapp2.conf
sudo nano testapp2.conf
Updated the config file as below:
[program:gunicorn]
directory=/home/ubuntu/projects/testapp2/testerapp
command=/home/ubuntu/projects/testapp2/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp2/testerapp/app.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp2/log/gunicorn.out.log
[group:guni]
programs=gunicorn
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status
The steps above work and the site is available in the browser when there is only one configuration. But when I add an additional configuration, it shows 502 Bad Gateway in the browser. Please help me solve this issue.
You can add one more config file in supervisor's conf.d and bind each Django app to its own socket or port. Note that each [program:x] section also needs a unique name, and each app's nginx server block must proxy to its own socket.
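For instance, a second config file (the paths and names below are hypothetical) would get its own program name, socket, and log files:
[program:gunicorn2]
directory=/home/ubuntu/projects/testapp3/testerapp
command=/home/ubuntu/projects/testapp3/venv/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/projects/testapp3/testerapp/app2.sock testerapp.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.err.log
stdout_logfile=/home/ubuntu/projects/testapp3/log/gunicorn.out.log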

I am not able to execute the next commands after the localstack start --host command

FROM ubuntu:18.04
RUN apt-get update -y && \
    apt-get install -y apt-utils && \
    apt-get install -y python3-pip python3-dev \
    pypy-setuptools
COPY . .
WORKDIR .
RUN pip3 install boto3
RUN pip3 install awscli
RUN apt-get install -y libsasl2-dev
ENV HOST_TMP_FOLDER=/tmp/localstack
RUN apt-get install -y git
RUN apt-get install -y npm
RUN mkdir -p .localstacktmp
ENV TMPDIR=.localstacktmp
RUN pip3 install localstack[full]
RUN SERVICES=s3,lambda,es DEBUG=1 localstack start --host
WORKDIR ./boto3Tools
ENTRYPOINT [ "python3" ]
CMD [ "script.py" ]
You can't start services in a Dockerfile.
In your case what's happening is that your Dockerfile is running RUN localstack start. That goes ahead and starts up the selected set of services and stays running, waiting for connections. Meanwhile, the build waits for the command you launched to finish before it moves on to the next step.
The usual answer to this is to start servers and clients in separate containers (or start a server in a container and run clients directly from your host). In this case, there is already a localstack/localstack Docker image and a prebuilt Docker Compose setup, so you can just run it:
curl -LO https://github.com/localstack/localstack/raw/master/docker-compose.yml
docker-compose up
The localstack GitHub repo has more information on using it.
If you wanted to use a Boto-based application with this, the easiest way is to add it to the same docker-compose.yml file (or, conversely, add Localstack to the Compose setup you already have). At this point you can use normal Docker inter-container communication to reach the mock AWS, but you have to configure this in your code:
import boto3

s3 = boto3.client('s3',
                  endpoint_url='http://localstack:4566')
You have to make similar changes anyway to use Localstack, so the only difference is the hostname you're setting.
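For example, a minimal docker-compose.yml sketch along those lines (the app service name and build context are assumptions about your project):
version: "3"
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3,lambda,es
      - DEBUG=1
    ports:
      - "4566:4566"
  app:
    build: .
    depends_on:
      - localstack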

Chained Docker images: start httpd and php

I've got 2 Docker containers: httpd-container and php-container.
httpd-container dockerfile:
FROM centos:latest
RUN yum -y install httpd
RUN sed -i 's/AllowOverride None/AllowOverride all/g' /etc/httpd/conf/httpd.conf
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
php-container dockerfile:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Currently I have to run /usr/sbin/php-fpm manually in php-container after it starts.
I've tried putting both commands in one script and setting that as the entrypoint, but it does not find it.
I've tried running supervisord and got errors as well.
Any advice is deeply appreciated!
Change the php-container Dockerfile to:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
COPY ./start_services.sh /
CMD ["/start_services.sh"]
and create start_services.sh with:
#!/bin/sh
# php-fpm daemonizes itself by default
/usr/sbin/php-fpm
# httpd stays in the foreground as the container's main process
/usr/sbin/httpd -D FOREGROUND
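If the shell reports that it cannot find the script, the usual culprits are a missing execute bit or CRLF line endings; making it executable before building is a quick first check:
chmod +x start_services.sh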

Docker Django image running but not found on localhost

I am trying to run a Django Docker image. The image itself runs in the command line without any errors. However, when I go to the URL where my image is hosted, the web page is not found.
Below is the Dockerfile I used to build the image
FROM python:3.6
MAINTAINER c15523957
RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get -y install libgdal-dev
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Below is the command used to run the image
docker run -it -p 8001:8000 c15523957/backendimage7
The console logs the message
System check identified no issues (0 silenced).
December 11, 2018 - 15:05:03
Django version 2.1.3, using settings 'backendproject.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
However, when I go to the browser, the web page is not found.
Note: When I run the Django application without the Docker image the webpage is seen. This is done with
python manage.py runserver
SOLUTION
I was using Docker Toolbox for Windows. Instead of localhost, Docker Toolbox binds to the address 192.168.99.100. So by going to
192.168.99.100:8001
I was able to access my web page.
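For anyone else on Docker Toolbox, the VM's address can be confirmed with docker-machine (assuming the default machine name):
docker-machine ip default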

Amazon S3 + Docker - "403 Forbidden: The difference between the request time and the current time is too large"

I am trying to run my Django application in a Docker container with static files served from Amazon S3. When I run RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput in my Dockerfile, I get a 403 Forbidden error from Amazon S3 with the following XML response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>RequestTimeTooSkewed</Code>
  <Message>The difference between the request time and the current time is too large.</Message>
  <RequestTime>Sat, 27 Dec 2014 11:47:05 GMT</RequestTime>
  <ServerTime>2014-12-28T08:45:09Z</ServerTime>
  <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
  <RequestId>4189D5DAF2FA6649</RequestId>
  <HostId>lBAhbNfeV4C7lHdjLwcTpVVH2snd/BW18hsZEQFkxqfgrmdD5pgAJJbAP6ULArRo</HostId>
</Error>
My docker container is running Ubuntu 14.04... if that makes any difference.
I also am running the application using uWSGI, without nginx or apache or any other kind of reverse-proxy server.
I also get the error at run-time, when the files are being served to the site.
Attempted Solution
Other Stack Overflow questions have reported a similar error with S3 (though none specifically in conjunction with Docker); they say this error occurs when your system clock is out of sync, and that it can be fixed by running
sudo service ntp stop
sudo ntpd -gq
sudo service ntp start
so I added the following to my Dockerfile, but it didn't fix the problem.
RUN apt-get install -y ntp
RUN ntpd -gq
RUN service ntp start
I also attempted to sync the time on my local machine before building the docker image, using sudo ntpd -gq, but that did not work either.
Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
# Update system clock so S3 does not get 403 Error
# NOT WORKING
#RUN apt-get install -y ntp
#RUN ntpd -gq
#RUN service ntp start
RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev
RUN apt-get install -y python-software-properties uwsgi-plugin-python3
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
# Run django commands
# python3.4 is at /usr/bin/python3.4, but which works too
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py syncdb --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py makemigrations --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py migrate --noinput
EXPOSE 8000
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
Noted in the comments but for others who come here:
If using boot2docker (i.e. if on Windows or Mac), the boot2docker VM has a known time issue when you sleep your machine (see here). Since the host for your Docker container is the boot2docker VM, that's where it syncs its time.
I've had success restarting the boot2docker VM. This may cause problems with losing some state, e.g. if you had some data volumes.
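As an alternative to a full restart, resyncing the clock inside the VM has also been reported to work; a sketch, assuming the boot2docker VM ships ntpclient:
boot2docker ssh sudo ntpclient -s -h pool.ntp.org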
Docker containers share the clock with the host machine, so syncing your host machine's clock should solve the problem. To force the container's timezone to match your host machine's, you can add -v /etc/localtime:/etc/localtime:ro to docker run.
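For example (the image name here is hypothetical):
docker run -v /etc/localtime:/etc/localtime:ro -p 8000:8000 my_image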
Anyway, you should not start a service in a Dockerfile. That file contains the steps and commands for building the image for your containers, and any process you run inside a Dockerfile will end when the build does. To start any service, you should add a run script or a process control daemon (such as supervisord) that runs each time you start a new container.
Restarting Docker for Mac fixes the error on my machine.