How to kill the Gunicorn server inside the request-response cycle - Django

I have one Django application and one Flask application.
They run on two Gunicorn servers inside Docker containers.
My goal is: if the database connection fails (i.e. the DB is down), I want to kill both the Django app and the Flask app.
How could I do this?
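One way to approach this on the Django side is a middleware that checks the database on every request and, if it is unreachable, signals the Gunicorn master to shut down. The code below is only a sketch: the class name is made up, and it assumes Gunicorn is the main process of the container, so that stopping Gunicorn also stops the container. The same os.kill call can go into a Flask before_request hook for the Flask app.

# middleware.py, an illustrative sketch; the class name is a placeholder
import os
import signal

from django.db import connection
from django.db.utils import OperationalError
from django.http import HttpResponse

class KillOnDbFailureMiddleware:
    # Inside a Gunicorn worker, os.getppid() is the master (arbiter) process;
    # sending it SIGTERM shuts the whole server down gracefully. If Gunicorn
    # is PID 1 in the Docker container, the container exits with it.

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        try:
            with connection.cursor() as cursor:
                cursor.execute("SELECT 1")  # cheap "is the DB reachable?" probe
        except OperationalError:
            os.kill(os.getppid(), signal.SIGTERM)  # ask the Gunicorn master to stop
            return HttpResponse("database unavailable", status=503)
        return self.get_response(request)

To activate it, add the middleware's dotted path to MIDDLEWARE in settings.py. With a restart policy on the containers, an orchestrator can bring both apps back up once the database is reachable again.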

Related

How to divide Daphne requests across multiple processors?

I use a Daphne server for my ASGI Django application. When I run htop on my Ubuntu server, it shows all the load on only one processor and the app gets slow, so I'd like to know the easiest way to speed up my server. The app is inside a Docker container.
I run Daphne with the command: daphne WBT.asgi:application --bind "0.0.0.0" --port "8000".
I use Nginx as a proxy.
I tried using the Uvicorn server with multiple workers, but then I get problems with the layout. I.e. let's say I have 4 workers: when I visit my web app, it loads correctly on only one worker, so in 25% of cases I get no layout.

Dockerizing a multi-node Python Django backend served by Nginx

I am trying to dockerize a project that consists of multiple backend nodes served by an Nginx server. Should I have an instance of Nginx running in each Docker container, serving just that node, or should I have a "central" Nginx server that serves all nodes?
If there is a central Nginx server, how would it communicate with the backend nodes?
The backend is implemented in Python/Django. If each container has its own Nginx instance, the communication between the web server and the Django application is straightforward. But how would I serve a Django application that lives in one container from an Nginx server that lives in another?
If each container has its own Nginx instance, how would that impact the overall performance of the system?

Django Celery tasks on a separate server

We have two servers, Server A and Server B. Server A is dedicated to running the Django web app. Due to the large amount of data, we decided to run the Celery tasks on Server B. Servers A and B use a common database. Tasks are initiated after post_save on models from Server A's web app. How can I implement this idea using RabbitMQ in my Django project?
You have 2 servers, 1 project and 2 settings files (1 per server):
Server A (web server + RabbitMQ)
Server B (Celery workers only)
Then you set the broker URL in both settings files, something like this:
BROKER_URL = 'amqp://user:password@IP_SERVER_A:5672//', where IP_SERVER_A is Server A's IP (this is what goes in Server B's settings).
For now, every task is sent to RabbitMQ on Server A, to the virtual host /.
On Server B, you just have to start a Celery worker, something like this:
python manage.py celery worker -Q queue_name -l info
And that's it.
Explanation: Django sends a message to RabbitMQ to queue a task, then the Celery workers consume the messages and execute the tasks.
Note: RabbitMQ does not have to be installed on Server A; you can install it on a Server C and reference it in the BROKER_URL in both settings files (A and B) like this: BROKER_URL = 'amqp://user:password@IP_SERVER_C:5672//'.
Sorry for my English. Greetings.
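To make the setup above concrete, here is a rough sketch of the code that could live in the shared project. The project, model, task and queue names (proj, SomeModel, process_data, queue_name) are placeholders; the important parts are that both servers load the same broker URL, Server A only publishes messages, and Server B runs the workers.

# tasks.py in the shared project; all names below are placeholders
from celery import Celery
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import SomeModel  # placeholder model

# Both settings files point at the same broker (RabbitMQ on Server A, or on a Server C).
app = Celery('proj', broker='amqp://user:password@IP_SERVER_A:5672//')

@app.task
def process_data(instance_id):
    # the heavy work; only Server B runs workers, so it executes there
    ...

@receiver(post_save, sender=SomeModel)
def queue_processing(sender, instance, created, **kwargs):
    # Server A's web process only publishes a message to RabbitMQ here;
    # a worker on Server B consumes it from 'queue_name'.
    process_data.apply_async(args=[instance.pk], queue='queue_name')

On Server B you then start the worker bound to that queue with the command shown above, and it picks up every task that Server A publishes.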

Stop Nginx completely in a Django project

I have used Nginx as the web server for a Django project, and now I want to switch back to the normal Django local development server.
I tried
sudo service nginx stop
(It shows that Nginx is stopped.)
I killed all the processes too.
But my local server at 127.0.0.1:8000 is still under Nginx's control, and my computer's IP is also under Nginx's control (it shows the Nginx default page).
I want to free that particular port (127.0.0.1:8000). How can I completely stop Nginx?

What is the purpose of NGINX and Gunicorn running in parallel?

A lot of Django app deployments on Amazon's EC2 use the HTTP servers NGINX and Gunicorn.
I was wondering what they actually do and why both are used in parallel. What is the purpose of running them both in parallel?
They aren't used in parallel. NGINX is a reverse proxy. It's first in line. It accepts incoming connections and decides where they should go next. It also (usually) serves static media such as CSS, JS and images. It can also do other things such as encryption via SSL, caching etc.
Gunicorn is the next layer and is an application server. NGINX sees that the incoming connection is for www.domain.com and knows (via configuration files) that it should pass that connection onto Gunicorn. Gunicorn is a WSGI server which is basically a:
simple and universal interface between web servers and web applications or frameworks
Gunicorn's job is to manage and run the Django instance(s) (similar to using django-admin runserver during development).
The contrast to this setup is to use Apache with the mod_wsgi module. In this situation, the application server is actually a part of Apache, running as a module.
Nginx and Gunicorn are not used in parallel.
Gunicorn is a Web Server Gateway Interface (WSGI) server implementation that is commonly used to run Python web applications.
NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server.
Nginx is responsible for serving static content, gzip compression, SSL, proxy_buffers and other HTTP concerns, while Gunicorn is a Python HTTP server that interfaces with both Nginx and your actual Python web-app code to serve dynamic content.
[Diagram: how Nginx and Gunicorn interact]
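To make the quoted WSGI definition concrete: a WSGI application is just a Python callable that takes the request environment and a start_response function. This is the interface Gunicorn speaks on one side while talking HTTP to Nginx on the other. The file and response below are made up for illustration; Django's own project.wsgi:application, created by get_wsgi_application(), is exactly this kind of callable, which is why gunicorn project.wsgi works.

# wsgi_demo.py, a bare WSGI callable for illustration
# run it with: gunicorn wsgi_demo:application
def application(environ, start_response):
    # environ carries the request data that Nginx proxied through to Gunicorn
    body = b"Hello from behind Nginx and Gunicorn\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

Nginx never runs this code itself; it forwards matching requests to the Gunicorn socket, and Gunicorn calls the application for each one.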