djangochannelsrestframework problem with Elastic Beanstalk and @model_observer - django

I have a consumer that tracks model changes using @model_observer.
I subscribe to the event via @action and track its changes. Everything works perfectly locally: the model changes and I immediately receive the change. But as soon as I deploy it to AWS Elastic Beanstalk, I can subscribe/unsubscribe, but I don't receive the change events, even though I can see that the session is not broken.
I thought the problem was with queues or sessions, but I checked this together with technical support, and they told me that everything was working correctly and the connection was being made correctly.
Maybe you know at least in which direction I should look and dig?
Just in case, to summarize: everything works correctly locally; after uploading to the server, only subscribing to / unsubscribing from the event works, but for some reason the change events never arrive.
my consumer.py with @model_observer
my django.conf
my Procfile:
web: gunicorn --bind :8000 --workers 3 --threads 2 settings.wsgi:application
websocket: daphne -b 0.0.0.0 -p 5000 settings.asgi:application
I checked Redis, I checked the DB, and I cleaned up the sessions.
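Not from the original post, but one thing this kind of two-process setup (gunicorn for HTTP, daphne for websockets) usually depends on is a shared channel layer backend such as Redis, since @model_observer events raised in one process only reach consumers in the other process through that layer. A minimal sketch, assuming channels_redis and a placeholder host:
# settings.py (sketch only; the host and port are placeholders, not the poster's values)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("your-redis-host", 6379)],
        },
    },
}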

Related

APScheduler running multiple times, once for each gunicorn worker

I have a Django project with APScheduler built into it. I have now moved to the production environment, so I bound it with gunicorn and nginx in the process. Gunicorn has 3 workers. The problem is that gunicorn initiates APScheduler for each worker and runs the scheduled job 3 times instead of running it only once.
I have seen similar questions here; it seems to be a common problem. Even the official APScheduler documentation acknowledges the problem and offers no way of fixing it.
https://apscheduler.readthedocs.io/en/stable/faq.html#how-do-i-share-a-single-job-store-among-one-or-more-worker-processes
I saw in other threads that people recommended putting --preload in the gunicorn settings. But I read that --preload starts the workers with the current code and does not reload when the code has changed (see "when not to preload" in the link below).
https://www.joelsleppy.com/blog/gunicorn-application-preloading/
I also saw someone recommend binding a TCP socket for APScheduler. I did not understand it fully, but basically it tries to bind a socket each time APScheduler is initiated; the second and third workers then hit that already-bound socket and get a socket error. Sort of:
import socket
try:
    # Only the first worker manages to bind this port; the others get an error.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 47200))  # any fixed, otherwise-unused port
except socket.error:
    print("socket already exists")
else:
    # run the APScheduler module here (e.g. dailypuzzleschedule.start() from below)
    pass
Does anyone know how to set up this kind of configuration, or whether it would actually work?
Another workaround I thought of is simply removing APScheduler and doing it with the server's cron. I am using DigitalOcean, so I can simply delete APScheduler and add a cron job that runs the module instead. However, I do not want to go that way because it will break the "unity" of the whole project and make it server dependent. Does anyone have any more ideas?
Schedule module:
from apscheduler.schedulers.background import BackgroundScheduler
from RENDER.views import dailypuzzlefunc

def start():
    scheduler = BackgroundScheduler()
    scheduler.add_job(dailypuzzlefunc, 'cron', day="*", max_instances=2, id='dailyscheduler')
    scheduler.start()
In the app:
from django.apps import AppConfig

class DailypuzzleConfig(AppConfig):
    default_auto_field = "django.db.models.BigAutoField"
    name = "DAILYPUZZLE"

    def ready(self):
        # ready() runs once in every process that loads the app registry,
        # which is why each gunicorn worker starts its own scheduler.
        from SCHEDULER import dailypuzzleschedule
        dailypuzzleschedule.start()
web: python manage.py collectstatic --no-input; gunicorn MasjidApp.wsgi --timeout 15 --preload
Use --preload. It's working well for me. With --preload, gunicorn loads the application once in the master process before forking the workers, so the ready() hook (and the scheduler it starts) only runs once.

Django celery and heroku

I have configured celery to be deployed on Heroku and all is working well; in fact, in my Heroku logs celery is ready to handle the tasks. Unfortunately, celery doesn't pick up my tasks. I feel there is some disconnection. Can I get some help?
If celery isn't picking up tasks, that means that nothing is talking to its broker. Make sure that the task producer is talking to the same broker URL as the celery worker (the broker URL will appear in the first 10-15 lines of the celery worker logs).
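A minimal sketch of what "same broker URL on both sides" can look like in a Django project, assuming the broker URL comes from a single environment variable; the module name myproject and the REDIS_URL variable are illustrative, not taken from the original post:
# myproject/celery.py (sketch only)
import os
from celery import Celery
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
# Both the web dyno (the task producer) and the worker dyno import this module,
# so they read the same broker URL from the same environment variable.
app = Celery("myproject", broker=os.environ.get("REDIS_URL", "redis://localhost:6379/0"))
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()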

Deploying Django to production: what is the correct way to do it?

I am developing a Django Wagtail application on my local machine connected to a local postgres server.
I have a test server and a production server.
However, when I develop locally and try to upload it, there is always some issue with makemigrations and migrate, e.g. KeyError etc.
What are the best practices for ensuring I do not run into these issues? What files do I need to port across?
So I'll tell you what I do and what most of the companies I worked at as a Django developer did, and I can tell you from experience that it worked pretty well.
First, containerize your application. This will make your life much easier, remove external influences from your code, and give you an easy way to reproduce your environment.
Your Dockerfile should start from some Python image and should basically do 3 things (a minimal sketch follows the list):
Install your requirements dependencies
Run the python manage.py migrate --noinput command
Run a http server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
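For illustration only, and not the answerer's actual file (the Python version, file names, and paths are assumptions), a Dockerfile along those lines could look like this:
FROM python:3.11-slim
WORKDIR /app
# 1. Install the dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the project and the gunicorn config file referenced above
COPY . .
COPY gunicorn.py /gunicorn.py
# 2. Apply the migrations, then 3. start the HTTP server
CMD python manage.py migrate --noinput && gunicorn -c /gunicorn.py wsgi:application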
You will run makemigrations on your local machine and make sure that everything is working before committing to the repo.
In your gunicorn.py you will put your settings to run the app, such as the number of workers (based on the CPU count), the binding port, and the folder that your app is in, something like:
import os
import multiprocessing

# Chdir to the specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'

# Bind the application to port 8000 on all IPv4 interfaces.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'

# Number of worker processes, derived from the available CPUs.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other stuff: for example the postgres database, redis (for cache), and a connection pooler for the database, depending on the size of your application.
It's highly recommended to have a step in the pipeline that runs tests; they need to run before everything else, maybe just after lint.
OK, what now? Now you need a way to deploy that stuff. The best approach for that scenario is: push your image to the GitHub registry, and you can add a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added to a GitHub Action in the build step, for example.
After having your new code in a new image on GitHub, you just need to update the current one. This can be done by creating a script on the server and running that script from a GitHub Action; it is something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
# Stop the old container and bring the stack back up with the newly pulled image
# (assuming the containers are described in a compose file; adapt to your setup).
docker stop {YOUR_CONTAINER} && docker compose up -d
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I create the image with a staging tag. You can create a rule in GitHub Actions, for example, to trigger that action when you create a new release, and create another action that is triggered on every new commit to build/deploy a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migrations, then recreate the database and run makemigrations again to achieve this), so you have a clean migration history on the server. Never create unnecessary relations between tables, always prefer cached properties instead of adding new columns, use UUIDs for unique ids, and try not to make breaking changes in the database; it's hard, but if you plan the database beforehand it's not so difficult to do.
Hit me up if you have any questions. A lot of the stuff I said can be done on other platforms such as GitLab, Travis, or Circle CI, but I used GitHub Actions in the example because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server doing backups of your database. The migrate command will only apply the changes after verification, but if something else breaks the database, this can save your life.

How to deploy a Python script with python http.server

My actual requirement is to expose a Python script as a web service.
Using Flask I did that. As Flask is not a production-grade server, I used uWSGI to deploy it.
Most of the sites suggest deploying this with NGINX. But my web service doesn't contain any static data.
I read somewhere that the queue size for uWSGI is 100. Does that mean that at a point in time it can queue up to 100 requests?
My manager suggested deploying the Flask script with http.server instead of NGINX. Can I deploy like this?
Is it possible to deploy a simple "HelloWorld" Python script with http.server?
Can you please provide an example of how I can deploy a simple Python script with http.server?
If I want to deploy more such "HelloWorld" Python scripts, how can I do that?
Also, can you point me to some links on http.server vs uWSGI?
Thanks, Vijay.
Most of the sites suggest deploying this with NGINX. But my web service doesn't contain any static data.
You can configure nginx as a reverse proxy, between the Internet and your WSGI server, even if you don't need to serve static files from nginx. This is the recommended way to deploy.
My manager suggested deploying the Flask script with http.server instead of NGINX. Can I deploy like this?
http.server is a simple server which is built into Python and comes with the same warning as Flask's development server: do not run in production.
You can't run a Flask script with http.server. Flask's dev server does the same job as http.server.
Technically you could run either of these behind nginx, but this is not advised, as both http.server and Flask's dev server are low-security implementations intended for single-user connections. Even with nginx in front, requests are ultimately handled by either server, which is why you need to launch the app with a proper WSGI server that can handle load.
I read somewhere that the queue size for uWSGI is 100. Does that mean that at a point in time it can queue up to 100 requests?
This doesn't really make sense. For example, gunicorn, which is one of many WSGI servers, states the following about load:
Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
So specifying the number of workers when you launch gunicorn with something like:
gunicorn --bind '0.0.0.0:5000' --workers 4 app:app
...will increase the capability of the WSGI server (gunicorn in this case) to process requests. However, leaving the --workers 4 part out, which will default to 1, is probably more than sufficient for your HelloWorld script.
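For reference, a minimal sketch of what the app:app argument above points at: a file named app.py (the name is an assumption, chosen to match the app:app notation) exposing a Flask application object for the WSGI server to run.
# app.py -- a minimal Flask app that the gunicorn command above would serve
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"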

Gunicorn is creating workers every second

I am running Django using Gunicorn behind Nginx. In one of my installations, when I run the gunicorn process, I keep getting debug output; it looks like workers are being created every second (I assume this because Django is loading very slowly, and I note the message "[20205] [DEBUG] 3 workers"). You can check the detailed output at this gist.
In a similar setup, I am running 3 more installations without any such issues, and the respective sites load almost instantly.
Any idea why this is happening? Thanks.
The polling of the workers every second on --log-level debug was introduced in gunicorn==19.2.
Change the log level to info.
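If you start gunicorn from a Python config file rather than the command line, the equivalent setting is loglevel; a sketch, assuming a gunicorn.py config file like the one shown in an earlier answer:
# gunicorn.py -- equivalent to passing --log-level info on the command line;
# this stops the per-second worker status messages from being logged.
# https://docs.gunicorn.org/en/stable/settings.html#loglevel
loglevel = 'info'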