I have a web app in 3 containers running on a Linux server (Ubuntu 20.04): a Django webapp, an nginx webserver and a Postgres DB. When I run 'docker-compose ps' it does not show any containers or any error, only the headings, as if there were no containers at all, not even a crashed one.
I am sure I am in the right folder, as there is no other docker-compose.yml on this server.
It seems almost as if the app is not running there, except that it is accessible via browser and working well.
I tried all kinds of commands for showing containers or images, using both docker and docker-compose, with no result.
I tried restarting the Docker service: the app went offline for a moment and then came back online (I have 'restart: always' set in the compose file). I also restarted the whole server, with the same result.
Even the script which makes the DB dump does not see the containers and has started to fail.
When I try to bring the project up with docker-compose up, the webapp and db containers start, but the webserver does not because its port is already taken.
Has anyone experienced this? I have worked with docker-compose for a while but this has never happened to me before. I don't know what to do; I need to update the code of this application and I don't want to lose the data in the DB (I am also not able to make a dump or get a shell into the container).
This app worked for years with the same configuration on another server with Ubuntu 18.04, so maybe it is a server-related problem.
Thanks.
It sounds like there is some fundamental problem going on. Have you tried simply using docker ps to see if your containers are running, or if anything is running at all?
If the containers are listed in the docker ps output, make sure you have the names correct in your docker-compose.yml.
If you don't see your containers with docker ps then maybe they crashed immediately after starting (and are therefore no longer running); docker ps -a would still list them.
I would have expected docker-compose ps to show something, even if your containers were crashing.
Please provide more details of your output from docker ps and/or docker-compose ps, and maybe the contents of your docker-compose.yml if these things don't help. You said "it doesn't show any container" when you run docker-compose ps: does it show anything at all (errors, blank lines, etc.)?
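If it helps, here are a few read-only commands for narrowing it down (the project name below is just a placeholder):

docker ps -a                                                               # every container this daemon knows about, including stopped ones
docker ps --format '{{.Names}}  {{.Label "com.docker.compose.project"}}'   # which compose project each running container belongs to
docker-compose -p some_project ps                                          # query a compose project by name instead of by the current folder
which -a docker docker-compose                                             # check whether more than one installation (e.g. apt vs. snap) is involved

If docker ps shows the containers but docker-compose ps does not, the usual explanation is that the running containers were created under a different compose project name (by default, the name of the directory the stack was first started from) or by a different Docker daemon than the one you are talking to now.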
Related
I am pretty new to Docker and Django. What I did was connect to a Linux server via PuTTY, create a folder in the root directory, and then start a new project using django-admin startproject.
Now, since I am using PuTTY for SSH terminal access, I cannot open a browser on the Linux machine and hit 127.0.0.1:8000 to see whether the "Congratulations!" screen from Django is visible or not.
So I assumed that the server might be running after the runserver command. Then using Docker I prepared a container on the Linux machine in which I have exposed port 9000. I cannot access this container either, since I cannot open a browser on the Linux machine. Now, I have three questions:
1.) How do I access this Docker container (inside the Linux machine) from my Windows machine? By this I mean: if I open, let's say, the Google Chrome browser on the Windows machine and enter some url:port, will I be able to see the "Congratulations!" screen in my browser on Windows?
2.) I am pretty confused about how the container's network port and IP work (I mean, how does the host or any other PC access this Docker container?). I tried looking at lots of documentation and YouTube videos but I am still very confused. I know that to make your website/app accessible to the external world you need a domain name hosted on some cloud, which you have to pay for, so how can Docker do this for free? Might sound like a lame question, but please help me understand.
3.) What should my docker run command look like so that I can access the app from my Windows machine?
My dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 9000
CMD python manage.py runserver 0.0.0.0:9000
I am using the following command to build:
docker build -t myproj .
Please help clarify my questions, guys. I'll be forever grateful :)
Thanks all!
When you run the container, you need a docker run -p option:
docker run -p 12345:9000 myproj
The second port number must match the port number the actual server process is listening on (in your case, the port argument to ./manage.py runserver). The first port number can be any port number that's not otherwise in use on the host system.
Then (up to networking and firewall constraints) another system can reach the container by using the host's IP address and the first port number, e.g. http://my-dev-system.internal.example.com:12345. If you're calling from the host directly, then these two systems are the same, and in this special case you can use http://localhost:12345.
As an implementation detail the container happens to have its own IP address, but you never need to look it up or use it. (Among other problems, it is not accessible from other machines.) From other systems' points of view a Docker-based process is indistinguishable from a process running directly on the host. Docker doesn't address the problems of needing somewhere to host the application, coming up with a DNS name for the host, or other similar concerns.
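As a concrete (hypothetical) walk-through with the Dockerfile from the question, where runserver listens on 9000 inside the container:

docker run -d -p 12345:9000 myproj        # publish host port 12345 -> container port 9000
curl http://localhost:12345/              # quick check from the Linux server itself
# from the Windows machine: open http://<linux-server-ip>:12345 in any browser

Host port 12345 is only an example; any free port works, including 9000 itself (-p 9000:9000). If Django answers with a DisallowedHost error when you use the server's IP, you may also need to add that IP (or '*' while testing) to ALLOWED_HOSTS in settings.py.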
One note on EXPOSE 9000: by itself it does not make the port reachable from outside the container; it only documents which port the app listens on inside the container. What actually opens it up to the outside world is publishing the port with -p when you run the container. After doing so, go to a browser and navigate to <server_ip>:<published_port> and you should see the page.
I am trying to configure a CI job on Bamboo for a Django app whose tests rely on a database (Postgres 9.5). It seems that a prudent way to go about this is to run the whole test in a Docker container, as I do not control the agent environment, so I cannot install Postgres there.
Most guides I found recommend running Postgres and Django in two separate containers and using docker-compose to manage them easily. In that scenario each Docker image runs just one service, started with CMD. In Bamboo, however, I cannot use docker-compose; I need to use just one image, so I am trying to get Postgres and Django to run nicely together in one container, with little success so far.
My problem is that I see no easy way to start Postgres as a service inside Docker but NOT as the Docker CMD command; the official postgres image uses an entrypoint.sh approach, which is also described in the official Docker docs.
But it is not clear to me how to implement that. I would appreciate your help!
Well, basically you would start Postgres as a background process in the docker-entrypoint shell script that otherwise starts your Django application.
The only trick here is that you need to put a 'trap' command in it so that you can send a shutdown/kill to the background process when your main process stops.
Although I have done that a thousand times, I know that it is a good source of programming errors. In general I just use my docker-systemctl-replacement, which takes care of running multiple applications as services, just as if the container were a virtual machine hosting multiple applications.
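A minimal sketch of that entrypoint idea, assuming an image with PostgreSQL already installed; the data directory, log path and the final command are placeholders, and on Debian images pg_ctl usually lives under /usr/lib/postgresql/<version>/bin, so you may need the full path:

#!/bin/sh
# docker-entrypoint.sh (sketch): run Postgres in the background next to the main process
su postgres -c 'pg_ctl start -D /var/lib/postgresql/data -l /tmp/pg.log -w'

# stop Postgres again whenever this script exits or receives a signal
trap "su postgres -c 'pg_ctl stop -D /var/lib/postgresql/data -m fast'" EXIT INT TERM

# main process (your Django app or test run); run it as a child and wait,
# so the shell stays around to execute the trap
python manage.py runserver 0.0.0.0:8000 &
wait $!

The trap/wait combination is exactly the fiddly part mentioned above: if you exec the main process instead, the shell (and with it the trap) is gone.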
Your only other option is to add a startup script in your Dockerfile, or kick it off as part of your docker run ... commands. We don't generally use the "Docker" tasks, as I find them ... distasteful (which is also why I usually just fall back to running a "Script" task and directly calling docker run in that script task).
Anyway, you'd have to have your Docker container execute a script that would:
Start up Postgres (something equivalent to sudo systemctl start postgresql; note that systemctl itself usually isn't available inside a container, so in practice this means pg_ctl or the packaged init script)
Execute your tests.
Your Dockerfile will have to install PostgreSQL and do some minor setup work, I imagine (like creating the relevant users and databases with the proper owner). Since we're all good citizens, we remember never to run our containers as root, right?
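For illustration, a rough Dockerfile along those lines; the package names assume a Debian-based image, and run-tests.sh is a hypothetical helper script that starts Postgres, creates the test role/database and then runs the test suite:

FROM python:3.6-slim
# install PostgreSQL from the distribution packages
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql postgresql-contrib \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# run-tests.sh would start Postgres (e.g. via pg_ctlcluster or the packaged init script),
# create the test role/database, then run "python manage.py test"
CMD ["sh", "/app/run-tests.sh"]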
Note - you can always hack around getting two containers to talk to each other without using docker-compose. It's a bit less convenient, but you could do something like:
docker run --detach --cidfile=db_cidfile --name ci_db postgresql_image
...
docker run --link ci_db testing_image
Make sure that you EXPOSE the right ports on the postgresql image to the testing_image container.
EDIT: Looking more at my specific case - we just install PostgreSQL into a base CentOS host rather than use the default postgresql image (using yum install http://yum.postgresql.org/..../pgdg-centos...rpm and then just installing the postgresql-server and postgresql-contrib packages from there). There is a CMD [ "/usr/pgsql-ver/bin/postgres", "-D", "/var/lib/pgsql/ver/data"] in our Dockerfile, too. We don't do anything fancy with the Docker container, though. NOTE: we don't use this in production at all; this is strictly for local and CI testing.
I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660 and owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop; but on anything like a production system it deserves some real consideration of its security implications.
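If you do decide that trade-off is acceptable for your development machine, the usual steps look like this (log out and back in afterwards so the group change takes effect):

ls -l /var/run/docker.sock     # typically: srw-rw---- 1 root docker ...
sudo usermod -aG docker $USER  # add your user to the docker group
newgrp docker                  # or simply log out and back in
docker ps                      # should now work without sudo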
I am new to Docker and am experimenting with developing a Django App on Docker.
I have followed the example in this link here:
Currently I am developing my app and have made changes to various files within the web directory. For now, in order to test my changes, I have had to remove all my running containers, stop my docker machine, start my docker machine, attach the docker machine, and run docker-compose up. This is a time-consuming process and is unproductive, especially if I need to keep testing after small changes.
My question is: if I make changes to the image (changes in the web directory), how can I update my container to reflect those changes, or should I be doing things differently?
How do other people develop using Docker? What are your best practices?
You could use volumes to map a host directory onto the container's web directory. Any changes in the host directory will be reflected immediately, without restarting the container. See the post below.
How to make a docker container directory accessible from host?
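A sketch of what that looks like in a docker-compose.yml; the service name and paths are assumptions, so match them to your project layout:

version: "2"
services:
  web:
    build: ./web
    volumes:
      - ./web:/usr/src/app      # host code directory mounted over the code baked into the image
    ports:
      - "8000:8000"

With Django's runserver inside the container, its auto-reloader picks up edits made on the host, so there is no need to rebuild the image or recreate the container for every small change.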
You can use docker-compose up --build to rebuild the image and container after making changes. It will automatically rebuild and restart any changed containers. There shouldn't be any reason to stop docker machine. If you are using a Mac or Windows PC, you can try the new beta app, which is a bit easier to use than prior versions.
Also see: https://docs.docker.com/compose/reference/
As for best practices, this probably isn't really the right forum unless you have a more specific question.
Currently I have 3 linode servers:
1: Cache server (Ubuntu, varnish)
2: App server (Ubuntu, nginx, rabbitmq-server, python, php5-fpm, memcached)
3: DB server (Ubuntu, postgresql + pg_bouncer)
On my app-server I have multiple sites (top-level domains). Each site is inside a virtual environment created with virtualenvwrapper. Some sites are big with a lot of traffic, and some sites are small with little traffic.
A typical site consists of Python (Django), Celery (beat, flower) and Gunicorn.
My current pattern is to work inside a staging environment on the app server and commit changes to git, then switch to the production environment, do a git pull and a ./manage.py migrate, and restart the process with sudo supervisorctl restart sitename:, but this takes time! There must be a simpler method!
Therefore it seems like Docker could help simplify everything, but I can't decide on the best approach for managing all my sites and the containers inside each site.
I have looked at http://panamax.io and https://github.com/progrium/dokku, but not sure if one of them fit my needs.
Ideally I would run a development version of each site on my local machine (emulating the cache server, app server and db server), make code changes there and test them. Once I saw the changes worked, I would execute a command that would do all the heavy lifting and ship the changes to the Linode servers (mostly the app server, I would think), run the migrations and restart the project on the server.
Could anyone point me in the right direction as how to achieve this?
I have faced the same problem. I don't claim this is the best possible answer and am interested to see what others have come up with.
There doesn't seem to be any really turnkey solution on Docker yet.
It's also been frustrating that most of the 'Django+Docker' tutorials focus on just a single Django site, so they bundle the webserver and everything else into the same Docker container. I think if you have multiple sites on a server you want them to share a single webserver, but this quickly gets more complicated than what the tutorials present, at which point they are no longer much help.
Roughly what I came up with is this:
using Fig to manage containers and the complicated Docker config that would be tedious to type as command-line options all the time
sites are Django, on uWSGI+Nginx (no reason you couldn't have others though)
I have a git repo per site, plus a git repo for the 'server'
separate containers for db, nginx and each site
each site container has its own uWSGI instance... I do some config switching so I can either bring up a 'dev' container with uWSGI acting as a standalone web server, or a 'live' container where the uWSGI socket is exposed to the main Nginx container, which then takes over as the front-side web server (roughly sketched after this list).
I'm not sure yet how useful the 'dev' uWSGI servers are; I might switch to just running the Django dev server and sharing my local code dir as a volume in the container, so I can edit and get live reloading.
In the 'server' repo I have all the shared Dockerfiles, for Nginx server, base uWSGI app etc.
In the 'server' repo I have made Fabric tasks to do my deployment (checkout server and site repos on the server, build docker images, run fig up etc).
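To make that layout a bit more concrete, here is a rough fig.yml-style sketch; every name, image and port in it is made up for illustration:

db:
  image: postgres
site1:
  build: ./sites/site1          # image with the Django project + uWSGI for this site
  links:
    - db
site2:
  build: ./sites/site2
  links:
    - db
nginx:
  build: ./nginx                # front-side web server
  ports:
    - "80:80"
  links:
    - site1                     # nginx proxies to each site's uWSGI (e.g. uwsgi_pass site1:3031)
    - site2

The 'dev' variant would instead publish a single site's uWSGI port (or run the Django dev server) directly and skip the nginx container.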
Speaking of deployment, frankly I'm not too keen on the Docker Registry idea. This seems to mean you have to upload hundreds of megabytes of image files to the registry server each time you want to deploy a new container version. That sucks if you are on a limited-bandwidth connection at the time and seems very inefficient.
That's why so far I decided to deploy new code via Git and build the new images on the server. I don't use a Docker Registry at all (apart from the public one for a base Ubuntu image). This seems to go against the grain of Docker practice a bit so I'm curious for feedback.
I'd strongly recommend getting stuck in and building your own solution first. If you have to spend time learning a solution like Dokku, Panamax etc that may or may not work for you (I don't think any of them are really ready yet) you may as well spend that time learning Docker directly... it will then be easier to evaluate solutions further down the line.
I tried to get on with Dokku early on in my search but had to abandon it because it's not compatible with boot2docker... which means on OS X you're faced with the 'fun' of setting up your own VirtualBox VM to run the Docker daemon. It didn't seem worth the hassle when I wasn't certain I wanted to be stuck with how Dokku works at the end of the day.