Mount host system /app as read only inside docker container - django

How would you mount the host system's /app directory as read-only within the docker container?
Use case: developing a Django application running inside a docker container, and you want Django to reload each time changes are made to /app code on the host system.

Creating a read-only bind mount is documented under "Use a read-only bind mount" in the Docker docs.
In your case the command will look something like:
docker run -v $(pwd)/app:/app:ro ...
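To sanity-check that the mount really is read-only, a write attempt from inside the container should fail with a "Read-only file system" error (the file name here is just an illustration):
docker run --rm -v $(pwd)/app:/app:ro python:3 touch /app/test.txt
Django's autoreloader is unaffected, since it only needs to read the mounted files to detect changes made on the host.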

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
"I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image."
I didn't catch the causal relationship here. In fact, we just need to add two steps to your Django image:
Follow "Install client binaries on Linux" in the Docker docs to download the prebuilt docker client binary; then your Django image will have the docker command.
When starting the Django container, add a /var/run/docker.sock bind mount. This allows the Django container to talk directly to the docker daemon on the host machine and to start the data-analysis container on the host. Because the analysis container is not started inside the Django container, the two have separate system resources; in other words, the analysis container's resources are not limited to those assigned to the Django container.
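For step 1, a rough sketch of what the download looks like, for example as a RUN step in the Django image's Dockerfile (it assumes curl is available, and the version number is illustrative; pick one compatible with the daemon on your host):
curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz -o /tmp/docker.tgz
tar -xzf /tmp/docker.tgz -C /tmp
cp /tmp/docker/docker /usr/local/bin/ && rm -rf /tmp/docker /tmp/docker.tgz
Step 2 is just the -v /var/run/docker.sock:/var/run/docker.sock flag shown in the sample below.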
A sample with a docker image that already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
As you can see, although the initial container does not have access to the host's /dev folder, the container launched from inside it (via the host's docker daemon) gets its own, separate access to host resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool directly in your Django image.
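Once the client binary and the socket mount are in place, the celery task can shell out to a plain docker run, something like the following (the image name and data path are placeholders):
docker run --rm -v /host/data/job123:/data analysis-image
Note that because the command is executed by the host's docker daemon, the -v host path refers to the host filesystem, not to a path inside the Django container.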

docker-compose same config for dev and production but enable code sharing between host and container only in development

Since the most important benefit of using Docker is keeping the dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Say we have a Django application, we use gunicorn to serve it in production, and we have a dedicated apache2 as a reverse proxy (this apache2 is outside docker by design). So this application (docker-compose) has only two parts, web (Django) and db (mysql). There's nothing wrong with the db part.
For the Django part, the dev routine without docker would be to use a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when docker comes in, since all source code should be packed into the image, which gives our dev workflow the big overhead of recreating the image and container again and again. One might end up with the following solutions (which I find not elegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source code folder automatically appear in the container and gunicorn picks up the change. This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as it introduces a dependency on the source code being present on the host server.
I know there is a command-line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different envs is another dead end. (I am not 100% sure about this as I'm still quite new to docker, please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only one single docker-compose.yml for both dev and prod
I'm able to dev with live changes easily without recreating docker container
Thanks a lot!
Define your django service in docker-compose.yml:
services:
  backend:
    image: backend
Then add a file for dev, docker-compose.dev.yml:
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:container_path
To launch for prod, just docker-compose up
To launch for dev
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot reload the dev django app, just reload gunicorn:
ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
I have also tried to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod or docker-compose run dev. There is some duplication here, but usually not a lot.
Use multiple docker-compose.yml files and merge them, e.g.: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/ (a quick way to preview the merged result is shown just after this list).
I usually just comment out my volumes section, but that's probably not the best solution.
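For the file-merging strategy above, a quick way to preview the combined configuration before bringing anything up is docker-compose config (same Compose CLI syntax as used elsewhere in this thread):
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config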

How to copy data from docker container to ECS on startup (AWS)?

I have two containers: one is a web server based on Node.JS with an assets directory, and the other is nginx, which proxies page requests to the web server and serves static files from the assets directory.
I created an AWS cluster and EC2 instance, built and pushed the docker images to the registry, and made tasks to deploy my applications, but I can't share the assets directory with nginx because the directory is not part of that container.
So to solve my problem I figured I would create an EFS volume and attach it, add permissions for ec2-user, and make the directory available at the path /var/html/assets.
Cool, but how do I copy the assets content from my web-server docker container to /var/html/assets?
I want to make it public / shared because soon I will add more servers which should also place their assets in this common directory.
The process should be automated and work on each deployment. Guys, any suggestions? Thanks!
To copy the assets content from your web-server docker container to your host machine, say you want to save it to /var/html/assets on the host, use this command to run your container:
docker run --name=nginx -d -v /var/html/assets:[Your Container path] -p 5000:80 nginx
-v /var/html/assets:[Your Container path] sets up a bind mount that links the [Your Container path] directory from inside the Nginx container to the /var/html/assets directory on the host machine. Docker uses a : to split the host path from the container path, and the host path always comes first.
Hope it will help!
I solved the problem by making the host directory writable with chmod 777 /var/html/assets, then adding a volume that points to the host directory and applying it to both the web and nginx containers. When the web container starts, it runs a cp command to copy the assets into the mounted directory (the host directory). Nginx then sees the populated directory and can use it.
Note: it's a temporary workaround; giving rwx access to the directory for everyone is not a good approach, for security reasons.
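A rough sketch of that startup copy, assuming the shared volume is mounted at /var/html/assets and the image builds its assets into /app/assets (both paths and the final start command are placeholders, not from the question):
#!/bin/sh
# entrypoint of the web container: populate the shared mount, then start the server
cp -r /app/assets/. /var/html/assets/
exec node server.js
nginx, mounting the same volume, then serves whatever the web container copied in on startup.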

Docker: How to rely on a local db rather than a db docker container?

Overall: I don't want my django app to rely on a docker container as my db. In other words, when I run an image
docker run -p 8000:8000 -d imagename
I want it to connect to my local db. I have my settings.py configured to connect to a pg db, so everything is fine when I do something like
python manage.py runserver
Feel free to call out any incorrect use of certain terms, or just my overall understanding of docker. From all the tutorials I've seen, they usually create a docker-compose file that relies on a db image spun up in a container separate from the web app. Examples of things I've gone through:
https://docs.docker.com/compose/django/#connect-the-database, http://ruddra.com/2016/08/14/docker-django-nginx-postgres/, etc. At this point I'm extremely lost, since I don't know whether to do this in my Dockerfile, in my project's settings.py, or in docker-compose.yml (I'm guessing I shouldn't even have the last one, since it's for multi-container apps, which I'm trying to avoid). [Aside: lastly, can one run a Django app reliant on celery & rabbitmq in just one container? As with my pg example, I only see instances of having them all in separate containers.] As for my Dockerfile, it's pretty much this:
FROM python:3
ENV APP 'http://githubproj'
RUN git clone $APP \
&& cd $APP/projectitself \
&& pip install -r requirements.txt
CMD cd $APP_DIR/mydjangoproject && gunicorn mydjangoproject.wsgi:application --bind 0.0.0.0:8000
In order to allow your containerized Django application to connect to a local database running on the host machine, you need to enable incoming connections from your docker interface. You do that by adding the following rule to iptables on your local machine:
$ sudo iptables -A INPUT -i docker0 -j ACCEPT
Next, you have to configure your postgres server to listen on multiple addresses. Open /etc/postgresql/<version>/main/postgresql.conf, search for the line containing listen_addresses = 'localhost', and change it to:
listen_addresses='*'
After these changes, you should be able to connect to your local postgres database from inside the container.
This answer might give you further clarifications on how to connect to your local machine from your container.
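To apply and check the change, something along these lines should work (the user and database names are placeholders, and depending on your pg_hba.conf you may also need to allow connections from the Docker subnet):
sudo systemctl restart postgresql
docker run --rm -it postgres psql -h 172.17.0.1 -U myuser mydb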
To connect from the container to the host, you can use the IP address of the docker0 bridge. Run ifconfig on the host to find the docker0 IP address (the default is 172.17.0.1, I believe). Then connect to that from inside your container.
This is obviously not super host-portable as that IP might change between machines, so a wrapper script might be useful to find and inject that IP into the container.
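A minimal sketch of such a wrapper, assuming settings.py reads the database host from a DATABASE_HOST environment variable (that variable name is an assumption, not something from the question):
# find the docker0 address on the host and hand it to the container
DB_HOST=$(ip -4 addr show docker0 | grep -oP '(?<=inet )[\d.]+')
docker run -p 8000:8000 -e DATABASE_HOST="$DB_HOST" -d imagename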
Better still, postgres in a container! :p
Also, if connecting to a remote postgres then just provide the IP of the remote instance (no different to regular inter-connectivity here).

Sharing directories in a Docker container both with a Dockerfile and after the container is running

Sharing data between a running docker container and my host (on AWS) seems overly complicated. From the docker documentation it seems as if I need to specify volumes when I start the container.
I found this: https://github.com/synack/docker-rsync
But this watches recursively and only copies from the host machine to the docker container.
I'm looking for a way to create (preferably in a Dockerfile) a folder visible on my host machine on AWS, so that I can scp files into it and have them visible in my docker container. I also want my docker container to be able to write to that folder, so that if the container is stopped I won't lose those files.
As a side note, I already declared in my Dockerfile
VOLUME /Training-master
but I don't know how to access it from my machine, and when I stopped the container I lost the data.
Does anyone know how to do this or can they point me in the right direction?
What you are looking for is provided by docker runtime options, documented here: http://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
At the end of that page, it's clearly mentioned:
Note: The host directory is, by its nature, host-dependent. For this reason, you can't mount a host directory from a Dockerfile, because built images should be portable. A host directory wouldn't be available on all potential hosts.
As Raghav said, a host directory cannot be mounted from a Dockerfile, because built images should be portable.
But after you create the image, you can run this command to create a shared folder between the host and the container. Be careful: the mount will shadow (hide) a directory in the docker container if it has the same name as an existing folder:
$ sudo docker run -itd -v /home/ubuntu/Sharing:/Share dockeruser/imageID:version bash
/home/ubuntu/Sharing -- path to the sharing folder on the host computer
/Share -- path to the sharing folder inside the container
dockeruser/imageID:version -- the image to run
-v -- specifies you are creating a bind mount
-d -- daemonizes the container, putting it in the background
bash -- the command for the container to execute
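To verify the share works, drop a file on the host side and look for it inside the container (the container ID and file name are just examples):
touch /home/ubuntu/Sharing/hello.txt
docker exec <container-id> ls /Share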
Just for reference for Windows users:
1) You can mount a host folder into a container by
docker run -ti -v C:\local_folder\:c:\container_folder container1
2) Alternatively, you can create a volume:
docker volume create --name temp_volume
See the absolute path of the volume by:
docker volume inspect temp_volume
The mountpoint is the absolute path of the volume. You can add/remove files from that path. Then you can mount it to the container by:
docker run -ti -v temp_volume:c:\tmploc container1
Notice that both host and container are Windows machines.