I have a Docker image for running a Django app. If I mount the directory containing the Django app when I create the container, it works fine. But I want to make the image self-contained and not dependent on the local file system, so I changed the Dockerfile to copy the directory containing the Django app from the host machine into the image. But then, when I create the container (without mounting the directory), I get permission denied on all accesses to that directory (e.g. the socket, the static files, ...). Everything is world readable and executable. Does anyone have any clues as to what could be causing this?
I ended up fixing it. It turned out one of the directories in the path was not readable. That is, the Django app was in /foo/bar/baz, and although /foo and /foo/bar/baz were readable, /foo/bar was not. Once I chmod-ed that directory, all was well.
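For anyone hitting the same thing: a quick way to spot it (a sketch, assuming the paths from above) is to list the permissions of every component of the path, then grant world read/execute on the offending directory in the Dockerfile:

namei -l /foo/bar/baz
# if a component lacks r-x for "other", fix it after the COPY step:
RUN chmod o+rx /foo/bar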
Why do I have to restart my Docker container every time I make a change in my Django Python files? I'm running a Django app via Docker, and it seems any changes I make in my views (and possibly elsewhere) are not reflected until I restart my container.
For example, if I log output to the terminal or make changes and then refresh, there's no change. If I restart my container and then refresh, I get the result I would expect.
As I understand it, you can think of a Docker environment like an immutable type: it doesn't allow any change to the object once it has been created.
And that gives us more security.
If you want a workaround, there are VS Code extensions that can run a script containing commands after you edit your files; you can put a Docker command in that script to reload the environment after every edit.
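Such a script can be as small as a single restart command (the container name here is hypothetical):

docker restart my_django_container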
The code below works locally when I don't use Docker containers. But when I run the same code/project in Docker, while everything else works great, the part below doesn't behave as expected: it doesn't save the ContentFile to disk as an image file. The ImageField returns the path of the image, but the file doesn't actually exist.
from django.core.files.base import ContentFile
...
photo_as_bytearray = photo_file.get_as_bytearray()  # returns a bytearray
cf = ContentFile(photo_as_bytearray)
obj.photo.save('mynewfile.jpg', cf)  # <<< fails to create the file, but only on Docker
Not sure if there's something I need to change for Docker.
First of all, the origin of the problem was my Celery worker. It was using the same image as my web container, but...
While my web container has a volume associated with the web folder (./web:/path/to/app) to support hot reloading in the development environment, the Celery worker was not using this volume. I thought that using the same image (mywebimage:dev) for both would be enough, but it was not.
So now I'll edit my docker-compose file to make them both use the very same files. Celery was using a copy of the web directory, created by the COPY statement in the Dockerfile, rather than the actual directory I work on and edit. So when the Celery worker created a file, it wasn't being created on the volume that holds the actual web directory.
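The docker-compose change amounts to giving both services the same bind mount (a sketch; the service names and the Celery app name are hypothetical):

services:
  web:
    image: mywebimage:dev
    volumes:
      - ./web:/path/to/app
  worker:
    image: mywebimage:dev
    command: celery -A myproject worker
    volumes:
      - ./web:/path/to/app  # same bind mount as web, so the worker sees (and writes to) the real files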
Hope that helps someone who made the same mistake I did.
We are facing a very weird issue. We ship a Django application in a Docker container through GitHub Actions on each push. Everything is working fine except collectstatic.
We have the following lines at the end of our CD github action:
docker exec container_name python manage.py migrate --noinput
docker exec container_name python manage.py collectstatic --noinput
migrate works perfectly fine, but collectstatic just keeps on waiting when run through the GitHub Action. If I run the command directly on the server, it works just fine and completes within a few minutes.
Can someone please help me figure out what the issue could be?
Thanks in advance.
Now, I am far from the most experienced, but I did something similar recently and have some suggestions of where to look. I'm definitely not the greatest authority, though.
I wasn't using Docker, so I can't say anything about that side; based on the issues I had, here are some things I can recommend trying.
Take note that all of this was for a self-hosted runner. Things would be very different otherwise.
Check to make sure STATIC_ROOT and MEDIA_ROOT variables are set correctly in the settings file.
If STATIC_ROOT and MEDIA_ROOT come from environment variables, make sure you are serving the correct environment variables file, such as the .env file I used.
I used django-environ to serve my environment variables. The docs say to keep the .env file in the same directory as the settings file. But if you are putting the project on a production server with GitHub Actions, you can't keep the .env file anywhere in the project, because it will get overwritten every time new code is pushed.
So to fix that, you need to point to a .env file somewhere else on the server. Do that by specifying ENV_PATH.
https://django-environ.readthedocs.io/en/latest/
Under the section Multiple env files
Another resource that was helpful:
https://github.com/joke2k/django-environ/issues/143
I set up my settings file the way they did there.
I put my .env file in a proj directory I made inside the project's virtualenv folder.
I don't know if that's a good place for it, but that's how I did it. I didn't find much good info online about this and had to figure out a lot on my own.
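Putting the pieces above together, a minimal sketch of that settings-file pattern, assuming django-environ (the fallback path is hypothetical):

import os
import environ

env = environ.Env()
# Prefer an ENV_PATH set on the server; fall back to a .env next to settings.py
environ.Env.read_env(env_file=os.environ.get('ENV_PATH', os.path.join(os.path.dirname(__file__), '.env')))

STATIC_ROOT = env('STATIC_ROOT')
MEDIA_ROOT = env('MEDIA_ROOT')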
Make sure the user running the GitHub Action has permission to read the .env file.
Also, as with the .env file, if your static files are collected into the base directory of your project, you might have an issue with GitHub Actions overwriting those files every time new code is pushed. If you have a media directory that users upload files to, that is an even bigger issue, because those files won't get overwritten; they'll just disappear.
Now, even if this is a problem, it shouldn't cause GitHub Actions to get stuck on the collectstatic command. It would just cause files to get overwritten every time the workflow runs, and the media files to disappear.
If you do change the directory where the static and media files are located, as stated before, make sure all the path variables are correct in the settings file and the .env file.
You will also need to update the nginx config file with the new static and media root directories if you use nginx. I'm not sure how Apache handles this.
You can do that with this command:
sudo nano /etc/nginx/sites-available/myproject
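The blocks to update look something like this (a sketch; the alias paths are hypothetical and must match your STATIC_ROOT and MEDIA_ROOT):

location /static/ {
    alias /var/www/myproject/static/;
}

location /media/ {
    alias /var/www/myproject/media/;
}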
Don't forget to restart the nginx server after doing that (e.g. sudo systemctl restart nginx).
If you are writing static and media files to a location other than the base project directory on the server, also check the permissions on those directories. Make sure the user running the GitHub Action has permission to write to them. I suspect that could cause it to hang, but it might just as well cause an error.
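A quick sanity check from the server (the runner user and the path here are hypothetical):

sudo -u github-runner test -w /var/www/myproject/static && echo writable || echo not-writable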
Check all the syntax in the GitHub Actions YAML file. Make sure everything is correct and that it isn't hanging because of an incomplete command or something like that.
But yeah, those are some things I had to look at. Honestly, none of this might be relevant for you; for the most part, all of these issues should cause an error somewhere rather than a silent hang.
I can't really offer many external resources to look deeper into this, because I'm just speaking from personal experience.
Hope I could help.
Here's my GitHub repo for the project I did: https://github.com/pkudlanov/personal-portfolio-django
I hosted it on DigitalOcean on a Linux server using nginx and gunicorn.
I'm trying to migrate a Django application from Google Kubernetes Engine to Google Cloud Run, which is fully managed. Basically, with Cloud Run, you containerize your application into a single Dockerfile and Google does the rest.
I have a Dockerfile which at one point calls a bash script via ENTRYPOINT.
But I need to start both Nginx and Gunicorn. The Google Cloud Run documentation suggests starting Gunicorn like this:
CMD gunicorn -b :$PORT foo.wsgi
(let's say my Django app is named "foo")
But I also need to start Nginx via:
CMD ["nginx", "-g", "daemon off;"]
And since only one CMD is allowed per Dockerfile, I'm not sure how to combine them.
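One common workaround (a sketch; not something the question or the Cloud Run docs confirm, and the script name is arbitrary) is to wrap both commands in a start script and make that the single CMD:

#!/bin/sh
# start.sh: nginx daemonizes by default, so it goes to the background;
# gunicorn then runs in the foreground as the container's main process
nginx
exec gunicorn -b :$PORT foo.wsgi

with COPY start.sh /start.sh, RUN chmod +x /start.sh and CMD ["/start.sh"] in the Dockerfile.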
To try to get around some of these difficulties, I was looking into building from a base image that already has both working, and I came across this one:
https://github.com/tiangolo/meinheld-gunicorn-docker
But the paths don't quite match mine. Quoting from the documentation of that repo:
You don't need to clone the GitHub repo. You can use this image as a base image for other images, using this in your Dockerfile:

FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./app /app

It will expect a file at /app/app/main.py. And it will expect it to contain a variable app with your "WSGI" application.
My wsgi.py file ends up at /app/foo/foo/wsgi.py and contains an application named application.
But if I understand that documentation correctly, when it says it will expect the WSGI application to be named app and located at /app/app/main.py, it's basically saying that I need to tell it that app is actually called application, and that instead of finding it at /app/app/main.py it will find it at /app/foo/foo/wsgi.py.
I assume I can fix the app vs. application variable name by adding a line like app = application to my wsgi.py file, but I'm not sure how to correct the path that the image expects.
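If this image follows the same convention as its sibling tiangolo images, the module and variable can be overridden with an environment variable; that is an assumption to verify against the repo's README:

FROM tiangolo/meinheld-gunicorn:python3.7
COPY ./ /app
# APP_MODULE is assumed here to take "dotted.module.path:variable";
# verify the exact variable name in the meinheld-gunicorn-docker README
ENV APP_MODULE foo.foo.wsgi:application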
Can someone explain to me how to adapt this to my needs?
(Or any other way to get it working)
Target WSGI script not found or unable to stat: /opt/python/current/app/application.py
My app lives in a file called application.py, and my application's configuration looks like this:
I also tried uploading the sample app that AWS provides, which only contains application.py, and yet I still get this error.
What could be causing the error?
For me, it was this silly thing. On my Mac, I compressed by right-clicking on the folder/repository and compressing it to a zip. However, a zip made that way extracts to a folder which in turn contains the application; as a result, Elastic Beanstalk is unable to locate application.py at the top level.
The simple fix, then, was to select all the individual files inside the folder when creating the zip file for uploading (or to use the EB CLI to upload).
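The command-line equivalent is to build the zip from inside the project directory, so the files sit at the archive root (a sketch; the archive name is arbitrary):

cd myproject
zip -r ../myapp.zip . -x '*.git*'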
I had a similar issue. You should put your application.py in the root directory, as your WSGIPath suggests, or change the WSGIPath in .elasticbeanstalk/optionsettings.yourappname-env.
For me, I had my app instance stored in a variable called app, which wasn't recognised by Elastic Beanstalk. As soon as I changed the variable to application, it started working.
# In application.py or manage.py, after initialising the app
application = app
should do the trick.
Use application instead of app or whatever other variable name you are using:
application = Flask(__name__)