I am deploying my Django web application using Docker and Docker Compose, with Nginx and PostgreSQL. The deployment itself works well, but I have a problem with the default media files.
When I deployed locally and on my server, I added those default media files to the media folder by hand, but with Docker I can't do that.
One of my models is shown below. It has a default image "dictionary/default.png", and obviously when I run my Docker image it doesn't load, because the file doesn't exist anywhere.
class Dictionary(models.Model):
    dictionary_name = models.CharField(max_length=200)
    dictionary_picture = models.ImageField(upload_to='dictionary', default='dictionary/default.png')
I was thinking of keeping these default media files locally and copying them into the Docker media folder in the Dockerfile, but I don't know if this is possible.
Yes, it is possible: the Dockerfile provides a COPY instruction for exactly this.
For example, to copy all files from dictionary/ into the media folder:
...
# COPY copies a directory's contents, not the directory itself,
# so target media/dictionary/ explicitly to match the default path
COPY dictionary/ /app/media/dictionary/
...
If you have already set WORKDIR /app:
...
COPY dictionary/ ./media/dictionary/
...
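For context, here is a minimal Dockerfile sketch of where such a line could sit; the base image and the requirements step are assumptions, not your actual setup:

# illustrative Dockerfile -- base image and paths are assumptions
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# bake the default media into the image so default='dictionary/default.png' resolves
COPY dictionary/ ./media/dictionary/

Note that if you bind-mount a host directory over /app/media at runtime, it will hide the files baked into the image; in that case the defaults have to live on the host (or be copied into the volume by an entrypoint script) instead.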
The code below works locally when I don't use Docker containers. But when I run the same code/project in Docker, everything else works great, yet the part below doesn't behave as expected: it doesn't save the ContentFile to disk as an image file. The ImageField returns the path of the image, but the file doesn't actually exist.
from django.core.files.base import ContentFile
...
photo_as_bytearray = photo_file.get_as_bytearray() # returns bytearray
cf = ContentFile(photo_as_bytearray)
obj.photo.save('mynewfile.jpg', cf)  # <<< doesn't create the file, but only on Docker
Not sure if there's something I need to change for Docker.
First of all, the origin of the problem was my Celery worker. It was using the same image as my web container, but...
While my web container has a volume mapped to the web folder (./web:/path/to/app) to support hot reloading in the development environment, the Celery worker was not using this volume. I thought that using the same image (mywebimage:dev) for both would be enough, but it was not.
So now I'll edit my docker-compose file to make both services use the same (really the same) files. Because of the COPY statement in the Dockerfile, Celery was working on a copy of the web directory, not the actual directory I work on and edit. So when the Celery worker created a file, it wasn't created on the volume that holds the actual web directory.
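For illustration, a minimal docker-compose sketch of that fix; the image tag and mount path come from above, while the service names and the Celery command (including the project name myproject) are assumptions:

version: "3.8"
services:
  web:
    image: mywebimage:dev
    volumes:
      - ./web:/path/to/app   # bind mount that enables hot reloading in development
  celery:
    image: mywebimage:dev
    command: celery -A myproject worker -l info   # hypothetical invocation
    volumes:
      - ./web:/path/to/app   # same bind mount, so both services see identical files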
Hope this helps someone who made the same mistake I did.
I am using Django as the web framework and NGINX as the web server, deployed on Azure. My project is deployed with Docker containers.
My Django project's root structure is as follows:
root:
- app1
- app2
- media
Whenever I save images, they are correctly saved under the media folder. But whenever I run "docker-compose up", it replaces the source code, so my media folder gets cleaned out every time.
In my settings.py file, I have added as follows:
MEDIA_ROOT = os.path.join(BASE_DIR,'media')
MEDIA_URL = 'media/'
Kindly help me maintain the media files in a Docker-based environment.
As mentioned in the previous answer, it isn't best practice to handle media and static files inside your project or app directory; a dedicated file storage server is preferable. But I'll try to answer your question as asked.
Suppose you have a Django project directory named root, and inside it you manage the media and static folders. Because you are using Docker, every time your container gets restarted it wipes out the contents of both folders. So what you have to do is mount your media and static folders from inside the container to your local storage, i.e. /var/lib/docker/volumes/{volume_name}/_data, to persist your media and static files between restarts of your container.
I am describing the docker-compose version here:
version: "3.8"
services:
  app:
    image: {your django project image}
    # build: {direct build of your Dockerfile}
    volumes:
      - media:/src/media/
      - static:/src/static/
volumes:
  media:
  static:
My goal here is to point out the volume mounts. In the code above you define volumes this way, where /src is the working directory set in your Dockerfile via the WORKDIR directive; in my case media and static are direct children of the src folder. You can now run docker volume ls to see your volume names and inspect any of them with docker volume inspect {volume_name}. Usually you will find your volumes, i.e. media and static, here:
/var/lib/docker/volumes/{container_name}_media/_data
/var/lib/docker/volumes/{container_name}_static/_data
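To illustrate, an inspection session might look like this; the root_ prefix is an assumption based on the project directory name above, since Docker Compose prefixes volume names with the project name:

docker volume ls                    # lists volume names, e.g. root_media, root_static
docker volume inspect root_media    # the "Mountpoint" field shows the host path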
Hope this clears up the question.
You basically have to do two things:
Move the media files somewhere outside your source code directory (see the sketch after this list); keeping them in the source directory is bad practice for several reasons.
Use Docker volumes in your docker-compose configuration file to persist your media directory. You can find detailed documentation on configuring volumes in docker-compose here.
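As a minimal sketch of the first point, assuming a hypothetical /vol/media mount point backed by a Docker volume:

# settings.py -- illustrative only; /vol/media is an assumed mount point
MEDIA_ROOT = '/vol/media'   # outside the source tree, persists via the volume
MEDIA_URL = '/media/'

The matching docker-compose service would then mount a named volume at /vol/media, just like the media volume in the previous answer.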
So I finally managed to set up the local + prod test project I'm working on.
# wsgi.py
from django.core.wsgi import get_wsgi_application
from dj_static import Cling, MediaCling
from whitenoise.django import DjangoWhiteNoise  # whitenoise < 4.0

application = Cling(MediaCling(get_wsgi_application()))
application = DjangoWhiteNoise(application)
I set up static files using WhiteNoise (without any problems) and media (file uploads) using dj_static, with Postgres for both local and prod. Everything works fine at first: static files, file uploads.
But after the Heroku dynos restart, I lose all the file uploads. My question is: since I'm serving the media files from the Django app instead of something like S3, does the dyno restart wipe all that out too?
PS: I'm aware I can do this with AWS etc.; I just want to know if that's the reason I'm losing all the uploads.
Since I'm serving the media files from the Django app instead of something like S3, does the dyno restart wipe all that out too?
Yes! That's right. According to the Heroku docs:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code.
See also this answer and this answer.
Conclusion: for media files (the uploaded ones), you must use some external service (like S3 or similar). WhiteNoise is just for static files; see here why WhiteNoise is not suitable for serving user-uploaded (media) files.
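For completeness, a minimal sketch of offloading media to S3 with the django-storages package; the bucket name is hypothetical and the credentials are assumed to be provided as environment variables:

# settings.py -- illustrative django-storages configuration (S3 backend)
import os

INSTALLED_APPS += ['storages']
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-upload-bucket'   # hypothetical bucket name
AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY_ID']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_ACCESS_KEY']

With this in place, uploaded files go to the bucket instead of the dyno's ephemeral filesystem, so they survive restarts.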
I have a Docker image for running a Django app. If I mount the dir containing the Django app when I create the container, it works fine. But I want to make the image self-contained and not dependent on the local file system, so I changed the Dockerfile to copy the dir containing the Django app from the host machine into the image. But then, when I create the container (without mounting the dir), I get permission denied on all accesses to that dir (e.g. the socket, the static files, ...). Everything is world-readable and executable. Does anyone have any clues as to what could be causing this?
I ended up fixing it. It turned out one of the dirs in the path was not readable. That is, the Django app was in /foo/bar/baz, and although /foo and /foo/bar/baz were readable, /foo/bar was not. Once I chmod-ed that, all was well.
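A quick way to spot this kind of problem is to check every component of the path. A sketch using the example path above; the exact mode bits you need depend on which user the container runs as:

namei -l /foo/bar/baz   # prints owner and permission bits for each path component
chmod o+rx /foo/bar     # grant read+execute on the directory that was blocking access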
I'm trying to find a workflow with Docker and Django. Currently I'm using the basic configuration from the Docker documentation.
I'd like to use manage.py startapp directly from the container to start a new app using:
docker-compose run web ./manage.py startapp myapp
But all the files created in the volume are owned by the root user, not by me, so I can't edit them from the host.
My idea is to avoid installing all the requirements on my host machine, but maybe I shouldn't create apps from the container?
One possible solution is to create a user in the container with the same UID/GID as my user on the host machine, but that won't work if I try to use another account on my host machine...
Any suggestions?
What worked best for me was avoiding (or minimizing) file creation inside the containers. My Dockerfile just copies requirements.txt and installs the requirements, and the container accesses the app files through a mounted volume.
I pass the env var PYTHONDONTWRITEBYTECODE=1 to the containers, so Python does not create *.pyc/*.pyo files.
The few times I can't avoid it (like ./manage.py makemigrations), I run chown afterwards.
It's not ideal, but since this happens rarely in my case, I don't bother.
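For illustration, a minimal sketch of that setup; the Python version, paths, and service name are assumptions:

# Dockerfile -- only the dependencies are baked in; app code comes from the mount
FROM python:3.11-slim
ENV PYTHONDONTWRITEBYTECODE=1
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# docker-compose.yml (excerpt)
services:
  web:
    build: .
    volumes:
      - .:/app   # app files stay on the host, so their ownership stays with your user
    command: python manage.py runserver 0.0.0.0:8000

Because the source files are never copied into the image for development, only files created by in-container commands (like new migration files) can end up root-owned, and those are the ones to chown.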