Gunicorn --workers 3 --preload - Django

We are running a Django web application via Gunicorn + NGINX (on a Linux server). We find that the jobs we execute via APScheduler are each being run 3 times, which is undesirable. We solved the problem on our local machines; however, it persists when the app is run via Gunicorn.
This was the most informative thread:
Make sure only one worker launches the apscheduler event in a pyramid web app running multiple workers
Our problem is that we want to execute the following command: env/bin/gunicorn module_containing_app:app -b 0.0.0.0:8080 --workers 3 --preload, but we are unsure about two things:
We have a domain, "example.com"; should this be taken into consideration?
We have been unable to locate the env/bin/gunicorn path. Related to this, we are also trying to understand what to replace module_containing_app:app with.
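
For readers landing here, the gist of that linked thread is to make sure exactly one process starts the scheduler. Below is a minimal sketch of such a guard using an exclusive file lock, assuming a Unix host; my_job, start_scheduler_once, and the lock path are placeholder names, not from the original question:

import atexit
import fcntl
from apscheduler.schedulers.background import BackgroundScheduler

_lock_file = None  # kept open for the life of the process that wins the lock

def my_job():
    print("running scheduled job")  # placeholder for the real work

def start_scheduler_once(lock_path="/tmp/scheduler.lock"):
    global _lock_file
    _lock_file = open(lock_path, "wb")
    try:
        # Non-blocking exclusive lock: only one Gunicorn worker can win it.
        fcntl.flock(_lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        return None  # another worker is already running the scheduler
    scheduler = BackgroundScheduler()
    scheduler.add_job(my_job, "interval", minutes=5)
    scheduler.start()
    atexit.register(scheduler.shutdown)
    return scheduler

Calling start_scheduler_once() from the module that builds the app means each of the 3 workers attempts the lock but only one starts APScheduler; with --preload the import happens once in the master anyway, so the jobs run in a single process either way.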

Related

How to easily debug Redis and Django/Gunicorn when developing using Docker?

I'm not referring to more sophisticated debugging techniques, but to how to get access to the same kind of error messages that are normally directed to terminal tabs.
Basically, I'm adopting Docker in a Django project that also uses Redis.
In the old way of working, I opened a Linux terminal tab for Gunicorn like this: gunicorn --reload --bind 0.0.0.0:8001 myapp.wsgi:application
This tab kept running Gunicorn, and any Python error was shown there so I could see the problem and fix it.
I could also open a second tab for the Celery worker: celery -A myapp worker --pool=solo -l info
The same thing happened: the tab was occupied by Celery, any Python error in a task was shown there, and I could see the problem and correct the code.
My question is: using Docker, is there a way to make each container direct the errors that previously went to the screen into log files, so that I can debug my code when a Python error occurs?
What is the correct way to handle simple debugging during development using Docker containers?
After looking into this further in the Docker documentation, I found a page that solves the problem: View logs for a container or service
Basically, the command docker logs CONTAINER_ID shows on screen exactly what we would see in the terminal running the application.
It works perfectly for viewing the Django, Redis, and Angular logs.
Just type:
docker logs CONTAINER_ID
Replace CONTAINER_ID with the actual ID of the container whose logs you want to see.
To find the id type:
docker ps
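
One caveat worth adding (a sketch, not part of the original answer): docker logs only shows what the process writes to stdout/stderr, so Django should be configured to log to the console, e.g. with a LOGGING setting along these lines in settings.py:

# settings.py - a minimal sketch; tune logger names and levels to your project
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # StreamHandler writes to stderr, which docker logs captures
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}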

Gunicorn --preload option is causing workers to hang?

We have a Flask app that uses a lot of memory for ML models, and I'm trying to reduce the memory footprint by using Gunicorn's preload option. But when I add the --preload flag and deploy that (with -w 4, to a Docker container running on GKE), it handles just a few requests and then hangs until it times out, at which point Gunicorn starts another worker to replace it, and the same thing happens. It's not clear yet how many requests each worker will process before hanging (possibly just 1... possibly a few).
The timeout is over 10 minutes, so it seems to be hanging indefinitely.
This does not happen at all if I remove the --preload flag.
What is it about the --preload flag that could be causing the workers to hang indefinitely?
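
A plausible explanation, hedged since the question has no confirmed answer here: --preload imports the app in the master process and then forks the workers, so threads, locks, and client connections created at import time (common in ML stacks) are copied into the children in a half-initialized state and can deadlock there. The usual workaround is to defer heavy initialization until after the fork via a Gunicorn server hook; myapp and load_model below are placeholder names:

# gunicorn.conf.py - a sketch; run with: gunicorn -c gunicorn.conf.py app:app
workers = 4
preload_app = True  # app code is imported once in the master, then forked

def post_fork(server, worker):
    # Re-create anything that does not survive fork() inside each worker:
    # thread pools, gRPC/database clients, GPU contexts, the model itself.
    from myapp import load_model  # hypothetical loader
    load_model()

Loading the model per worker does give back some of the memory --preload was meant to save, so it is a trade-off between copy-on-write sharing and fork safety.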

Ajax not working with Django after some time

I have a Django 2.0 application hosted on a VPS; it is run with the following command in an SSH terminal:
/root/.local/bin/pipenv run \
  /home/user/.local/share/virtualenvs/example.com-IuTkL8w_/bin/gunicorn \
  myapp.wsgi:application --timeout 300 --workers 1 \
  --log-level=DEBUG &
For the first few hours after starting the application server with the above command, Ajax requests work fine, but after a few hours they start failing consistently.
There are many processes running behind that Ajax request, but I could not figure out where the request breaks, since the logs display in the console only while the SSH session that ran the above command is alive. There is no way to check the console log after terminating SSH and logging in again.
Killing all running processes and restarting the server with the above command makes it work fine again for a few hours.
1. What could be the reason for this ghost issue?
2. Is there some way to view console log in between anytime after SSH login?
3. If not, how can I set the server to restart automatically and periodically (since it works again after a restart)?
Edit 2
In the browser's network console, I see it is giving
[Errno 5] Input/output error
on a print() statement.
I have a bunch of print() statements to see output in the console, like:
print('----check_url')
print(check_url)
print('----product_id')
print(product_id)
Edit 3
I have the following lines in the code:
with open(joined_path_with_file, 'wb') as f:
    f.write(r.content)
Is the issue with this?
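
For what it's worth, a likely reading of that error, hedged since no accepted answer appears above: stdout is still attached to the terminal of the SSH session that launched Gunicorn, so once that session dies, every print() raises [Errno 5] Input/output error and the view fails. A sketch of the usual fix is to replace the print() calls with the logging module writing to a file (the log path is a placeholder; check_url and product_id are the variables from the snippet above):

import logging

# Log to a file instead of the (possibly dead) terminal.
logging.basicConfig(filename='/var/log/myapp/debug.log', level=logging.DEBUG)
logger = logging.getLogger(__name__)

logger.debug('----check_url %s', check_url)
logger.debug('----product_id %s', product_id)

Launching Gunicorn under nohup, or better a process manager such as systemd or supervisor, would also address questions 2 and 3: the process no longer depends on the SSH session, and the log file can be tailed from any later login.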

Simple docker example only appears to expose db container and not web

I clone this repo (it's pretty much based on the Docker docs here) and run docker-compose up. Docker builds the two containers and I see the output from db_1 (psql looks to be completely ready), but nothing at all from web_1, no output whatsoever.
I go to my host IP on port 8000 and nothing is running there. I am using Docker Toolbox for Mac. It's pretty much the simplest possible example of using Docker - any idea why I'm not seeing anything from my Django container?
Thanks in advance,
It might be that STDOUT of the web_1 container is mapped to show only WARN and ERROR levels. You say you're using Docker Toolbox for Mac? Have you tried reaching the website via the IP of the Docker Toolbox VM rather than the host IP? I'm not that familiar with Docker Toolbox, since there is a native Mac client (https://docs.docker.com/engine/installation/mac/). Maybe try the Docker Toolbox IP, not the host IP. I would also recommend using the native Docker for Mac, since I had problems with the Toolbox but none with the native client.
Hope I could help.
After taking a better look at the documentation, I was able to start your containers.
After the git clone:
cd sane-django-docker
docker-compose up -d
This is the output:
Starting sanedjangodocker_db_1
Starting sanedjangodocker_web_1
[root@localhost sane-django-docker]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                    NAMES
cde9e93c1a70   sanedjangodocker_web   "python3 manage.py ru"   19 seconds ago   Up 1 seconds   0.0.0.0:8000->8000/tcp   sanedjangodocker_web_1
73ad8cafe798   postgres:9.4           "/docker-entrypoint.s"   20 seconds ago   Up 1 seconds   5432/tcp                 sanedjangodocker_db_1
When I just performed docker-compose up (running in the foreground), I saw this issue:
LOG: shutting down
LOG: database system is shut down
After taking a better look at the documentation, I saw the problem:
Django will complain about the Postgres database not existing, so we'll create one:
docker exec sanedjangodocker_db_1 createdb -Upostgres webapp
Now Postgres is fine, but I had to restart the web app so it could find the database.
docker restart sanedjangodocker_web_1
Now I'm able to access it on IP:8000:
It worked!
Congratulations on your first Django-powered page.
I don't know how the Django app really works, but the setup is pretty strange.
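
As an aside, the restart was only needed because the web container came up before Postgres was ready. One way to avoid that race (a sketch, not from the original answer; the host, port, and file name are placeholders) is a small helper that the web container runs before manage.py:

# wait_for_db.py - hypothetical helper; run it before "python3 manage.py runserver"
import socket
import time

def wait_for_db(host='db', port=5432, timeout=60):
    # Poll the Postgres port until the db container accepts TCP connections.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            time.sleep(1)
    raise RuntimeError('database at %s:%d never became reachable' % (host, port))

if __name__ == '__main__':
    wait_for_db()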

Is supervisord needed for docker+gunicorn+nginx?

I'm running Django with Gunicorn inside Docker; my entry point for Docker is:
CMD ["gunicorn", "myapp.wsgi"]
Assuming there is already a process that runs the Docker container when the system starts and restarts it when it stops, do I even need supervisord? If Gunicorn crashes, won't that crash the container, which would then be restarted?
The only time you need something like supervisord (or another process supervisor) in a Docker container is if you need to start multiple independent processes inside the container when it starts.
For example, if you needed to start both nginx and gunicorn in the same container, you would need to investigate some sort of process supervisor. However, a much more common solution would be to place these two services in two separate containers. A tool like docker-compose helps manage multi-container applications.
If a container exits because the main process exits, Docker will restart that container if you configured a restart policy when you first started it (e.g., via docker run --restart=always ...).
The simple answer is no. And yes, you can start both nginx and gunicorn in the same container. You can either create a script that executes everything your container needs and start it with CMD at the end of your Dockerfile, or you can combine everything like so:
CMD (cd /usr/src/app && \
    nginx && \
    gunicorn wsgi:application --config ../configs/gunicorn.conf)
Hope that helps!