(beginner question)
I've successfully set up an nginx + gunicorn + Django Docker image on a Digital Ocean droplet.
My Django project follows the very good cookiecutter-django pattern (see here).
In this doc, there is a description of a supervisor install.
What I'm missing here is WHERE the supervisor is supposed to be running: locally or remotely?
I understand that if I install supervisor on my laptop it will "keep alive" my docker-compose up command.
But what if I take a week off and my laptop runs out of battery?
Will supervisor stop doing its job?
If so, I need to install it on my droplet, right?
Supervisor should run on your droplet. It will make sure that your web server restarts automatically if it ever gets interrupted. An example configuration would be something like the following, from this excellent blog post:
[program:hello]
command = /webapps/hello_django/bin/gunicorn_start ; Command to start app
user = hello ; User to run as
stdout_logfile = /webapps/hello_django/logs/gunicorn_supervisor.log ; Where to write log messages
redirect_stderr = true ; Save stderr in the same log
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8 ; Set UTF-8 as default encoding
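Once that config file is saved on the droplet (typically under /etc/supervisor/conf.d/), a few standard supervisorctl commands register and manage the program; a minimal sketch, assuming the program name hello from the config above:
sudo supervisorctl reread          # detect the new/changed config file
sudo supervisorctl update          # apply it (this starts the program)
sudo supervisorctl status hello    # verify the program is running
sudo supervisorctl restart hello   # restart it manually if needed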
I was a bit confused.
This SO post was helpful: Is supervisord needed for docker+gunicorn+nginx?
As was this tutorial: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
I've now added "restart: always" to my docker-compose.yml file:
redis:
  image: redis:latest
  restart: always
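The same policy can be applied to every long-running service in the compose file; a minimal sketch, assuming a web service built from the project's Dockerfile (the service layout here is illustrative, not the exact cookiecutter-django file):
web:
  build: .
  restart: always
redis:
  image: redis:latest
  restart: always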
I'm not referring to more sophisticated debugging techniques, but to how to get access to the same kind of error messages that are normally directed to terminal tabs.
Basically I'm adopting Docker in a Django project also using Redis.
In the old way of working I opened a linux terminal tab for gunicorn like this: gunicorn --reload --bind 0.0.0.0:8001 myapp.wsgi:application
And this tab kept running Gunicorn and any Python error was shown in this tab so I could see the problem and fix it.
I could also open a second tab for the Celery worker: celery -A myapp worker --pool=solo -l info
The same thing happened: the tab was occupied by Celery, and any Python error in a task was shown there so I could see the problem and correct the code.
My question is: using Docker, is there a way to make each container direct these same errors, which previously went to the screen, to log files so that I can debug my code when a Python error occurs?
What is the correct way to handle simple debugging during development using Docker containers?
After looking into this further in the Docker documentation I found a page that solves this problem: View logs for a container or service.
Basically the command "docker logs CONTAINER_ID" shows on the screen exactly what we would see in the terminal running the application.
It works perfectly for viewing the Django, Redis and Angular logs.
Just type:
docker logs CONTAINER_ID
Replace CONTAINER_ID with the actual ID of the container whose logs you want to see.
To find the ID, type:
docker ps
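If you want the output to keep streaming like a dedicated terminal tab used to, both Docker and Compose support following the logs (the service name web here is just an example):
docker logs --follow CONTAINER_ID
docker-compose logs -f web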
After quite a bit of trial and error, and a step-by-step attempt to find solutions, I thought I'd share the problems here and answer them myself according to what I've found. There is not much documentation on this anywhere except small bits and pieces, so this will hopefully help others in the future.
Please note that this is specific to Django, Celery, Redis and the Digital Ocean App Platform.
This is mostly about the below errors and further resulting implications:
OSError: [Errno 38] Function not implemented
and
Cannot connect to redis://......
The first error happens when you try to run the celery command celery -A your_app worker --beat -l info
(or similar) on the App Platform. It appears that this is currently not supported on Digital Ocean. The second error occurs when you make one of a number of possible mistakes.
PART 1:
While Digital Ocean might remedy this in the future, here is an approach that offers a workaround. The problem is the unsupported execution pool. Google "celery execution pools" if you want to know more about how they work. The default one is prefork, but what you need is either gevent or eventlet. I went with the former for my purposes.
Whichever you pick, you will have to install it, as it doesn't come with celery by default. In my case it was: pip install gevent (and don't forget to add it to your requirements as well).
Once you have that, you can re-run the celery command, but note that gevent and beat are not supported within a single command (combining them will result in an error). Instead, do the following:
celery -A your_app worker --pool=gevent -l info
and then separately (if you want to run beat that is) in another terminal/console
celery -A your_app beat -l info
In the first command you can also specify the concurrency, like so: --concurrency=100. This is not required but useful. Read up on what it does, as that goes beyond the solution here.
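Putting it together, the worker command with an explicit concurrency (using the example value from above) would be:
celery -A your_app worker --pool=gevent --concurrency=100 -l info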
PART 2:
In my specific case I tested the above locally (in development) first to make sure it works. The next issue was getting this into production. I use Redis as the db/broker.
In my specific setup I have most of my celery configuration in the the_main_app/celery/__init__.py file, but sometimes people put it directly into the_main_app/celery.py. Whichever it is, make sure that the REDIS_URL is set correctly. For development it usually looks something like the following, where YOUR_VAR_NAME is then assigned as the broker:
import os
from celery import Celery

# Falls back to the local Redis instance when REDIS_URL is not set
YOUR_VAR_NAME = os.environ.get('REDIS_URL', 'redis://localhost:6379')

app = Celery('the_main_app')
app.conf.broker_url = YOUR_VAR_NAME
The remaining settings are all documented on the "celery first steps with django" help page but are not relevant for what I am showing here.
PART 3:
When you set up your Redis database on the App Platform (which is very simple), you will see the connection details listed as 'public network' and 'VPC network'.
The celery documentation says to use the following URL format for production: redis://:password@hostname:port/db_number. This didn't work. If you are not using a yaml file, then you can simply copy-paste the entire connection string (select it from the dropdown!) from the Redis DB connection details, set up an App-Level environment variable in your Digital Ocean project named REDIS_URL, and paste in that entire string (and also encrypt it!).
The string should look something like this (rediss with two s's!):
rediss://USER:PASS@URL.db.ondigitalocean.com:PORT
You are almost done. The last step is to set up the workers. It was fine for me to run the PART 1 commands as console commands on the App Platform to test them, but eventually I set up a small worker (+ Add Component) for each line and pasted them into the Run Command, as shown below.
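For reference, the Run Command for each of the two worker components is simply the corresponding line from PART 1:
celery -A your_app worker --pool=gevent --concurrency=100 -l info
celery -A your_app beat -l info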
That is basically the process step by step. Good luck!
I cloned this repo (it's pretty much based on the docker docs here) and ran docker-compose up. Docker builds the 2 containers and I see the output from db_1 (psql looks to be completely ready), but nothing at all from web_1, no output whatsoever.
I go to my host IP + port 8000 and nothing is running there. I am using Docker Toolbox for Mac. It's pretty much the simplest possible example of using Docker, so any idea why I'm not seeing anything from my Django container?
Thanks in advance,
It might be possible that the STDOUT of the web_1 container is mapped to display only WARN and ERROR levels. You say you're using Docker Toolbox for Mac? Have you tried to reach the website via the IP of the Docker Toolbox VM instead of the host IP? I'm not that familiar with Docker Toolbox, since there is a native Mac client (https://docs.docker.com/engine/installation/mac/). Maybe try to reach the Docker Toolbox IP, not the host IP. I would also recommend using Docker for Mac natively, since I had problems with the Toolbox but none with the "native" client.
Hope I could help.
After taking a better look at the documentation I was able to start your containers.
After the git clone:
cd sane-django-docker
docker-compose up -d
This is the output:
Starting sanedjangodocker_db_1
Starting sanedjangodocker_web_1
[root@localhost sane-django-docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cde9e93c1a70 sanedjangodocker_web "python3 manage.py ru" 19 seconds ago Up 1 seconds 0.0.0.0:8000->8000/tcp sanedjangodocker_web_1
73ad8cafe798 postgres:9.4 "/docker-entrypoint.s" 20 seconds ago Up 1 seconds 5432/tcp sanedjangodocker_db_1
When I just performed docker-compose up (running in the foreground), I saw this issue:
LOG: shutting down
LOG: database system is shut down
After taking a better look at the documentation I saw the problem:
Django will complain about the postgres database not existing so we'll create one:
docker exec sanedjangodocker_db_1 createdb -Upostgres webapp
Now postgres is fine, but I had to restart the webapp so it could find the db:
docker restart sanedjangodocker_web_1
Now I'm able to access it on IP:8000.
It worked!
Congratulations on your first Django-powered page.
I don't know how the Django app really works, but the setup is pretty strange.
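As a side note, one way to avoid the manual createdb step might be to let the official postgres image create the database on first start via its POSTGRES_DB environment variable; a sketch for the compose file (assuming the db service from this repo, and that the data volume is still empty on first run):
db:
  image: postgres:9.4
  environment:
    POSTGRES_DB: webapp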
My current objective is to have Travis deploy our Django + Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy, since Travis has built-in support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers with images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore, which is the usual postgres image plus the hstore extension), the Redis container (which uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is not what is needed here. My status right now is that I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to run docker-compose up manually while ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an After Install script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This is to explicitly start running the Docker daemon--without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution for the Amazon Linux AMI). The After Install script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
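For illustration, the corresponding hook in appspec.yml might look something like this (the script path scripts/after_install.sh is hypothetical; hooks, location, timeout and runas are standard CodeDeploy AppSpec keys):
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root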
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin and stdout redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.
I've been trying to write a Vagrantfile to start up my Docker container to run a small web app I've been writing. However, when I run vagrant up I eventually get an error saying:
The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
I'm very new to vagrant so I'm not really sure what the best way to try and fix the problem is.
My Vagrantfile contains:
Vagrant.configure("2") do |config|
  config.vm.synced_folder "thelibrary", "/thelibrary"
  config.vm.provider "docker" do |d|
    d.image = "django-dev"
    d.has_ssh = false
    d.ports = ["8000:8000"]
    d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
  end
end
I'm not sure why it says the command doesn't keep running. I can run the docker container with the same command and it will spin up my django app without any issues.
I had the same problem, but adding the option
d.create_args = ["-i"]
solved it.
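In the context of the Vagrantfile above, that would look like the following sketch (-i keeps STDIN open, so Docker doesn't consider the container stopped):
config.vm.provider "docker" do |d|
  d.image = "django-dev"
  d.has_ssh = false
  d.ports = ["8000:8000"]
  d.create_args = ["-i"]  # keep STDIN open so the container stays running
  d.cmd = ["python", "/thelibrary/manage.py", "runserver", "0.0.0.0:8000"]
end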
I spent the day trying to get the Docker machine running... finally got it working. Here is what I have in my Vagrantfile; hope this can at least get you started:
config.vm.provider :docker do |d|
  d.image = "paintedfox/postgresql"
  d.name = "db"
  d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
end
vagrant status returns me this:
Current machine states:
dev running (docker)
Another solution you can try is to remove all your existing images and start fresh; it could be that your image is broken.