How to deploy a Python script in python http.server - Flask

My actual requirement is to expose a Python script as a web service.
Using Flask I did that. As Flask is not a production-grade server, I used uWSGI to deploy it.
Most of the sites suggest deploying this with NGINX. Why? My web service doesn't contain any static data.
I read somewhere that the queue size for uWSGI is 100. Does that mean it can queue up to 100 requests at a point of time?
My manager suggested deploying the Flask script in http.server instead of NGINX. Can I deploy it like this?
Is it possible to deploy a simple "HelloWorld" Python script in http.server?
Can you please provide an example of how I can deploy a simple Python script in http.server.
If I want to deploy more such "HelloWorld" Python scripts, how can I do that?
Also, can you point me to some links on http.server vs uWSGI?
Thanks, Vijay.

Most of the sites suggest deploying this with NGINX. Why? My web service doesn't contain any static data.
You can configure nginx as a reverse proxy, between the Internet and your WSGI server, even if you don't need to serve static files from nginx. This is the recommended way to deploy.
My manager suggested deploying the Flask script in http.server instead of NGINX. Can I deploy it like this?
http.server is a simple server which is built into Python and comes with the same warning as Flask's development server: do not run in production.
You can't run a Flask script with http.server; Flask's dev server does the same job as http.server.
Technically you could run either of these behind nginx, but this is not advised, as both http.server and Flask's dev server are low-security implementations intended for single-user connections. Even with nginx in front, requests are ultimately handled by one of those servers, which is why you need to launch the app with a WSGI server that can handle load properly.
I read somewhere that the queue size for uWSGI is 100. Does that mean it can queue up to 100 requests at a point of time?
That isn't really something you need to worry about here. For example gunicorn, which is one of many WSGI servers, states the following about load:
Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
So specifying the number of workers when you launch gunicorn, with something like:
gunicorn --bind '0.0.0.0:5000' --workers 4 app:app
... will increase the capability of the WSGI server (gunicorn in this case) to process requests. However, leaving the --workers 4 part out, which will default to 1 worker, is probably more than sufficient for your HelloWorld script.
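For reference, the app:app in that command refers to a Flask application object named app inside a module app.py; a minimal "HelloWorld" version might look like this (a sketch, not your actual script):

# app.py - minimal Flask "HelloWorld" that gunicorn can serve as app:app
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

To deploy more such scripts, each one would typically be its own module with its own application object, run as its own gunicorn process (on a different port, for example).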

Related

Collecting django static files for deployment

I developed a Django application to handle requests from an Angular application. Now, should I collect static files for the Django application while deploying it?
python manage.py collectstatic
Will my Django app respond to the requests without static files? I ask because I found in much of the documentation that collectstatic is performed as one of the steps to deploy a Django application.
It depends on how you are going to run your Django application. If you are putting it anywhere public, you should have DEBUG set to False in your settings.py, and as such you will need to do the collectstatic step, but not necessarily every time you make a change; only if you've added more static files.
The 'Serving the files' section of the documentation (https://docs.djangoproject.com/en/3.1/howto/static-files/) is clear on using runserver in production:
This method is grossly inefficient and probably insecure, so it is unsuitable for production.
Determine what sort of volume your site is going to have to decide whether you want something like nginx as your web server, proxying requests to your Django app run by Daphne, Gunicorn or Uvicorn, while nginx (or another web server of your choice) serves up your static content. If it is too much for the one connection or the one server, it may make sense to host your static files elsewhere - it really all depends on your use case.
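For reference, collectstatic copies all static assets into the directory named by STATIC_ROOT, which is what nginx (or whichever web server you choose) would then serve; a minimal settings sketch (the path is illustrative) is:

# settings.py - static file settings that collectstatic relies on (illustrative path)
STATIC_URL = "/static/"
STATIC_ROOT = "/var/www/myproject/static/"  # collectstatic gathers files here for the web server to serve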

Running django-q using Elastic Beanstalk on AWS Linux 2 instances

I use Elastic Beanstalk on AWS to host my web app, which needs a task runner like django-q. I need to run it on my instance and am facing difficulty doing that. I found this script https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06 which does what I need, but it's for the older version of EC2 instances. So far I know I must run django-q post-deployment, but is it possible to add the process to the Procfile along with starting the WSGI server?
Any help that could point me in the right direction will be greatly appreciated.
You can create a "Procfile" at the root of your bundle with the following content:
web: gunicorn --bind 127.0.0.1:8000 --workers=1 --threads=15 mysite.config.wsgi:application
qcluster: python3 manage.py qcluster
Obviously, replace "mysite.config.wsgi" with the path to your own wsgi module.
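For completeness, the qcluster process picks up its configuration from the Q_CLUSTER setting in settings.py; a minimal sketch using the Django ORM as the broker (the values here are illustrative) could be:

# settings.py - minimal django-q configuration (illustrative values)
Q_CLUSTER = {
    "name": "mysite",
    "workers": 2,
    "timeout": 60,
    "retry": 120,
    "orm": "default",  # use the Django database as the broker
}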
I ended up not finding a solution; I chose a different approach altogether to fulfill the requirements: a crontab making curl requests to a Django server. On the Django admin I would create task routes linking them to modules in the file storage, then paste the route info into the crontab settings and set the appropriate time interval.

How to deploy a django/gunicorn application as a systemd service?

Context
For the last few years, I have only been using Docker to containerize and distribute Django applications. For the first time, I am asked to set things up so the application can be deployed the old way. I am looking for a way to get close to the levels of encapsulation and automation I am used to while avoiding any pitfalls.
Current heuristic
So, the application (my_app):
is built as a .whl
comes with gunicorn
is published on a private Nexus and
can be installed on any of the client's server with a simple pip install
(it relies on Apache as a reverse proxy / static file server and on PostgreSQL, but this is irrelevant for this discussion)
To make sure that the app will remain available, I thought about running it as a service managed by systemd. Based on this article, here is what /etc/systemd/system/my_app.service could look like
[Unit]
Description=My app service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=centos
ExecStart=gunicorn my_app.wsgi:application --bind 8000
[Install]
WantedBy=multi-user.target
and, from what I understand, I could provide this service with environment variables, kept in /etc/systemd/system/my_app.service.d/.env, to be read by os.environ.get() somewhere in the app:
SECRET_KEY=something
SQL_ENGINE=something
SQL_DATABASE=something
SQL_USER=something
SQL_PASSWORD=something
SQL_HOST=something
SQL_PORT=something
DATABASE=something
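In settings.py those variables would be picked up roughly like this (a sketch, showing only the secret key and database settings):

# settings.py - reading the variables above (sketch)
import os

SECRET_KEY = os.environ.get("SECRET_KEY")
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE"),
        "NAME": os.environ.get("SQL_DATABASE"),
        "USER": os.environ.get("SQL_USER"),
        "PASSWORD": os.environ.get("SQL_PASSWORD"),
        "HOST": os.environ.get("SQL_HOST"),
        "PORT": os.environ.get("SQL_PORT"),
    }
}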
Opened questions
First of all, are there any obvious pitfalls?
upgrades: will restarting the service be enough to upgrade the application after a pip install --upgrade my_app?
library conflicts: is it possible to make sure that gunicorn my_app.wsgi:application will use the correct versions of the libraries used by the project? Or do I need to use a virtualenv? If so, should source /path/to/the/venv/bin/activate be used in ExecStart, or is there a cleaner way to make sure ExecStart runs in a separate virtual environment?
where should python manage.py migrate and python manage.py collectstatic live? Should I run those from /etc/systemd/system/my_app.service?
Maybe you can try this, though I'm also not sure it works. This is for the third question.
You have to create a virtualenv and install gunicorn and Django in it.
Create a Python file like this, "gunicorn_config.py".
Fill it in like this, or adjust as needed:
#gunicorn_config.py
command = 'env/bin/gunicorn'   # path to the gunicorn executable inside the virtualenv
pythonpath = 'env/myproject'   # project directory to add to the Python path
bind = '127.0.0.1:8000'        # address and port gunicorn listens on
Then, in the systemd ExecStart, put env/bin/gunicorn -c env/gunicorn_config.py myproject.wsgi.

How does a Django app start its virtualenv?

I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it, I understand I need to activate it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe the Django part is served by gunicorn. The designer left town.
Okay, I've found how, in my case, the virtualenv was being invoked for Django.
The BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
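If you want to confirm at runtime which interpreter the app is actually running under, a quick check (for example from manage.py shell or a throwaway view) is:

# quick check of which Python interpreter is running the app
import sys
print(sys.executable)  # e.g. /media/disk/www/aavso_apps/.venv/bin/python when the venv is in use
print(sys.prefix)      # points inside the virtualenv rather than the system install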
Just use the absolute path when calling Python in the virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you must call /var/webapps/yoursite/env/bin/python.
If you use just Django behind a reverse proxy, Django will use whatever Python environment is active for the user that started the server, i.e. whatever which python resolves to. If you're using a management tool like Gunicorn, you can specify which environment to use in its config, although Gunicorn itself requires us to activate the virtual environment before invoking Gunicorn.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx

Running a Django worker and Daphne in a Docker container

I have a Django application that runs in a Docker container. Recently I figured out that I'm going to need to add a websockets interface to my application. I'm using Channels with Daphne behind nginx, and Redis as a cache. The problem is that I have to run the Django workers and Daphne in one container.
The script that runs on container startup:
#!/usr/bin/env bash
python wait_for_postgres.py
python manage.py makemigrations
python manage.py migrate
python manage.py collectstatic --no-input
python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2
daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0
But it hangs on running the worker. I tried nohup but it doesn't seem to work. If I run Daphne directly from the container with docker exec, everything works just fine.
This is an old question, but I figured I would answer it anyway, because I recently faced the same issue and thought I could shed some light on it.
How Django channels work
Django Channels is another layer on top of Django and it has two process types:
One that accepts HTTP/Websockets
One that runs Django views, Websocket handlers, background tasks, etc
Basically, when a request comes in, it first hits the interface server (Daphne), which accepts the HTTP/Websocket connection and puts it on the Redis queue. The worker (consumer) then sees it, takes it off the queue and runs the view logic (e.g. Django views, WS handlers, etc).
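For context, with the Channels 1.x-style setup implied by runworker --only-channels, the worker executes consumer functions that are wired up in a routing module; a rough sketch (module and function names are hypothetical) might be:

# routing.py - rough Channels 1.x style routing sketch (names are hypothetical)
from channels.routing import route

def ws_message(message):
    # echo incoming websocket frames back to the client
    message.reply_channel.send({"text": message.content["text"]})

channel_routing = [
    route("websocket.receive", ws_message),
]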
Why it didn't work for you
Because you only run the worker (consumer), and it blocks the execution of the interface server (producer). That means no connections will be accepted and the worker is just staring at an empty Redis queue.
How I made it work
I run Daphne, Redis and the workers as separate containers for easy scaling. DB migrations, static file collection, etc. are executed only in the Daphne container. That container will only have one instance running, to ensure that there are no parallel DB migrations.
Workers on the other hand can be scaled up and down to deal with the incoming traffic.
How you could make it work
Split your setup into at least two containers. I wouldn't recommend running everything in one container (using Supervisor, for example). Why? Because when the time comes to scale the setup, there's no easy way to do it. You could scale your container to two instances, but that just creates another Supervisor with Daphne, Redis and Django in it... If you split the worker from Daphne, you can easily scale the worker container to deal with growing incoming traffic.
One container could run:
#!/usr/bin/env bash
python wait_for_postgres.py
python manage.py migrate
python manage.py collectstatic --no-input
daphne team_up.asgi:channel_layer --port 8000 -b 0.0.0.0
while the other one:
#!/usr/bin/env bash
python wait_for_postgres.py
python manage.py runworker --only-channels=http.* --only-channels=websocket.* -v2
The 'makemigrations' command
There is no need to run that command in the script you provided; if anything, it could block the whole thing because of some question it is awaiting input for (e.g. "Did you rename column X to Y?").
Instead, you can execute it in a running container like this:
docker exec -it <container_name> python manage.py makemigrations