How can I use Let's Encrypt with a Django application without stopping the server? - django

I have a Django application running on a server. I want to use Let's Encrypt to provide an encrypted connection. I could use the standalone option of their ACME client, but I don't want to stop my server, which is what I would have to do.
So there is the webroot option, which works with my already running web server (nginx). In this case Django would process the request. My question is: what should this look like on the Django side to get it working (keeping automated renewal every few months in mind)?

I don't know what setup others use, but I generally set up Django apps with Nginx serving static content and Gunicorn as the application server. It's widely accepted that Django apps usually use this kind of two-web-server setup. The standard instructions for setting up Let's Encrypt with Nginx worked fine for me.
DigitalOcean also has an excellent guide.
EDIT: It looks like Nginx can do a "graceful" reload that just updates the config with no downtime. For Debian or Ubuntu before systemd this would be sudo service nginx reload, while for a distro with systemd the command is sudo systemctl reload nginx.service.
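For reference, here is a minimal sketch of the webroot flow with certbot; the domain and webroot path are assumptions, and nginx answers the challenge directly so Django never sees the request:

# Assumed nginx location inside your server block:
#   location /.well-known/acme-challenge/ { root /var/www/letsencrypt; }

# Issue the certificate against the already running nginx
sudo certbot certonly --webroot -w /var/www/letsencrypt -d example.com

# Renew (e.g. from cron) and gracefully reload nginx afterwards
sudo certbot renew --post-hook "systemctl reload nginx"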

In case other users come this way like I did from Google, here's how I improved this situation:
I was unsatisfied with my options for creating ACME challenges for Let's Encrypt while running a Django application, so I rolled my own solution and created a Django app! Basically, you manage your ACME challenges as just another object, and the app serves them at the proper endpoint URL.
Yes, you are installing an app, which means a deploy/update of your project, but once you've done that, managing your challenges is far easier in the long run.
Simply pip install django-letsencrypt and follow the README to be on your way.
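A hedged sketch of what that setup typically involves (the exact app label and URL include are documented in the project's README, so they are not guessed here):

pip install django-letsencrypt
# add the app to INSTALLED_APPS and include its URLs as described in the README
python manage.py migrate   # create the table that stores your ACME challenge objects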

Related

How to set virtualenv to stay active on a host server

I created a website that uses Vue.js as the frontend and Django as the backend, with another service running behind everything that I'm making API calls to.
Django is set up to look at the dist folder of Vue.js and serve it when you run manage.py runserver. The problem is that the service I created is also in Python, and it needs to run in a virtualenv in order to work (it uses TensorFlow 1.15.2, which can only run in a contained environment).
I'm sitting here wondering how I can deploy the Django application and keep the virtualenv active, and I'm coming up with nothing. I've tried doing some research on this, but everything I found was not relevant to my problem. I've deployed it, and when I close the SSH connection the virtualenv stops.
If anyone can enlighten me, I would appreciate it.
I think you need nginx: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
If you just want to keep things running in a terminal session, I suggest tmux: https://github.com/tmux/tmux/wiki
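A minimal sketch of the tmux approach, assuming the virtualenv and service paths below (adjust them to your layout):

# Run the service inside a detached tmux session, using the virtualenv's python directly
tmux new-session -d -s myservice "/home/ubuntu/venv/bin/python /home/ubuntu/service/run.py"

# Reattach later to check on it; detach again with Ctrl-b d
tmux attach -t myservice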
You can use uWSGI and nginx to deploy Django apps on a server. Here are some helpful articles:
https://www.digitalocean.com/community/tutorials/how-to-set-up-uwsgi-and-nginx-to-serve-python-apps-on-centos-7
The official Django docs also have a page about it: https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/uwsgi/
There are also articles from other developers you can refer to in case you get stuck anywhere:
https://www.freecodecamp.org/news/django-uwsgi-nginx-postgresql-setup-on-aws-ec2-ubuntu16-04-with-python-3-6-6c58698ae9d3/
https://medium.com/@biswashirok/deploying-django-python-3-6-to-digital-ocean-with-uwsgi-nginx-ubuntu-18-04-3f8c2731ade1
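As a quick illustration of that kind of setup, here is a hedged sketch of running a project under uWSGI from inside its virtualenv (the project name and venv path are assumptions); nginx would then proxy requests to this process:

pip install uwsgi
uwsgi --http :8000 --module myproject.wsgi \
      --virtualenv /home/ubuntu/venv \
      --master --processes 4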

How to configure an Elastic IP with a Django app in AWS?

I am building an app using Django on an EC2 Ubuntu instance, and I have associated an Elastic IP with the instance.
I have done the following steps:
1. First created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using this command: python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group inbound settings to allow HTTP on port 80 from any address (0.0.0.0).
9. Was able to reach my project from any browser.
But the problem is that when I close the PuTTY session where I ran the runserver command, the Django project also stops; I did not stop it manually.
Please help me keep it running after the PuTTY session is closed as well.
Thanks,
Kripa Sharma
Take a look at this Answer
I highly recommend that you start using Elastic Beanstalk (with a Python instance) to take care of all these steps for you. It is very simple to set up, and there is no need to worry about any of the steps you listed.
You can use these instructions to see how to deploy a Django app in less than 5 minutes.
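For a rough idea of that workflow, here is a hedged sketch of the Elastic Beanstalk CLI flow; the application name, environment name, and Python platform version are assumptions:

pip install awsebcli
eb init -p python-3.8 my-django-app   # register the EB application and platform
eb create my-django-env               # provision the environment and deploy the current code
eb deploy                             # push subsequent updates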
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at the "How to deploy with WSGI" section, which gives a few recommendations for commonly used web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both.
The latter is usually achieved with an HTTP server/proxy such as nginx or Apache httpd. And again, Gunicorn has an example for us.
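To make that concrete, here is a hedged sketch of daemonizing Gunicorn with systemd; the unit name, user, project name, and paths are assumptions rather than the Gunicorn docs' exact example:

# Write a minimal systemd unit for gunicorn
sudo tee /etc/systemd/system/gunicorn.service > /dev/null <<'EOF'
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/venv/bin/gunicorn --bind 127.0.0.1:8000 myproject.wsgi

[Install]
WantedBy=multi-user.target
EOF

# Start it now and on every boot; nginx (or Apache) then proxies to 127.0.0.1:8000
sudo systemctl daemon-reload
sudo systemctl enable --now gunicorn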
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service, or even in a terminal multiplexer such as GNU screen or tmux, it's not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration).
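For instance, a quick sketch of protecting a long migration with tmux (the session name is arbitrary):

tmux new -s migrate          # open a named session on the server
python manage.py migrate     # run the long command inside it
# detach with Ctrl-b d, log out safely, then later reattach with:
tmux attach -t migrate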

Deploy Bitnami Django

I am quite computer-illiterate, but I have managed to use the Django framework on my own machine. I have had an account on Amazon Web Services (AWS) for some time, but it appeared rather complex to set up and make use of, so I put it off for a while. Then I decided to give it a try, and it was not as hard as I first thought to load an AMI and connect to the server with PuTTY. But since I was already using Bitnami's Django Stack, I decided to take a look at their hosting offer (which builds on AWS). Since they appeared to offer "one-click deployment", I set up a new server through their interface. But then, it seems the "one-click deployment" promise applies to the server itself; there does not seem to be any interface for deploying Django projects through their site. Having already used PuTTY, and after adding WinSCP to my machine, I can access the server and load my Django code onto it. But then I am lost. The documentation seems a bit thin (look here).
The crux of this is the following: can anyone make this part of the process more understandable, i.e., how do I deploy a Django project on a Linux server with Apache/mod_wsgi?
The other question is: I want to use Postgres. Am I free to install this on the server? Should I opt for EBM (EMB?) for this, or what is the downside of not having it?
I hope I am not too unworthy of your attention, thanks!
Regarding "how to deploy a Django project on a Linux server with Apache/mod_wsgi": the Bitnami AMI already comes with all of this configured. Once it is installed, go to the EC2 public URL on the default port 8000 and you will see the demo Django project set up there. You can add your own project once you have logged into the machine via PuTTY; check the /home/bitnami/ directory for the demo project. Copy your project over and configure your database.
Regarding "I want to use Postgres. Am I free to install this on the server?": Postgres and MySQL are already installed, and you use them the same way you would on your local machine. Then, in your project, run ./manage.py runserver 0.0.0.0:9000, since port 8000 is already running another application.
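A hedged sketch of those steps; the key file, project path, and public DNS below are placeholders, and the bitnami user is the AMI default:

# Copy the project to the instance (WinSCP does the same thing graphically)
scp -i mykey.pem -r myproject/ bitnami@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/home/bitnami/

# Log in and run it on a free port (8000 is taken by the demo project)
ssh -i mykey.pem bitnami@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
cd /home/bitnami/myproject
./manage.py runserver 0.0.0.0:9000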

What is the benefit of installing Gunicorn for my Django app on Heroku?

I have recently switched to Django for a web app I'm developing, and I followed the instructions at Heroku for getting a Django app running on Heroku. I have a virtual environment in which my app is developed, and I use git for version control and to push to Heroku. The link above suggests that I install Gunicorn:
The examples above used the default HTTP server for Django. For
production apps, you may wish to use a more production-ready embedded
webserver, such as Tornado, gevent’s WSGI server, or Gunicorn.
They then walk the user through installing Gunicorn.
My question is: what problems might I run into if I skip this step and just stay with the default? What benefits will Gunicorn give me?
Gunicorn is production-ready and really easy to use. I use it for my websites. You should usually run it behind a reverse proxy like Nginx. I'm not sure what Heroku is using. You really should try it.
In my experience it's much easier to use and configure than Apache and mod_wsgi, and other similar setups.
edit/update:
As a summary of the comments below, Heroku already uses Nginx as a reverse proxy.
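For context, switching a Heroku app from Django's default dev server to Gunicorn is essentially a two-file change; the project name myproject here is an assumption:

pip install gunicorn
echo "gunicorn" >> requirements.txt                           # so Heroku installs it
echo "web: gunicorn myproject.wsgi --log-file -" > Procfile   # how Heroku starts the app
git add requirements.txt Procfile
git commit -m "Serve with gunicorn" && git push heroku master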
Much better performance, and probably better security and stability, too. Django's development web server (which is used by Heroku by default) isn't really designed to serve production applications.
Django's server is a development server. It is lightweight and easy to use, but it should not be used in production because it is not production-ready and cannot handle many requests. This link offers a comparison between Gunicorn, uWSGI, and Django's development server.

Different methods to deploy Django project and their pros and cons?

I am quite a noob when it comes to deploying a Django project. I'd like to know the various methods to deploy a Django project and which one is the most preferred.
The Django documentation lists Apache/mod_wsgi, Apache/mod_python, FastCGI, etc.
mod_python is deprecated now; one should use mod_wsgi instead.
Django with mod_wsgi is easy to set up, but:
you can only use one Python version at a time [edit: in fact, you can only use the Python version mod_wsgi was compiled for]
[edit: it seems I was wrong about mod_wsgi not supporting virtualenv: it does]
So for multiple sites (targeting different Django/Python versions) on one server, mod_wsgi is not the best solution.
FastCGI can be used with virtualenv, and also with different Python versions, as you run it with
./manage.py runfcgi …
and then configure your web server to use this FCGI interface.
The new hot thing in Django deployment seems to be gunicorn. It's a web server that implements WSGI and is typically used as a backend, with a "big" web server in front as a proxy.
Deployment with gunicorn feels a lot like FCGI: you run a process that does the Django work via manage.py, and a web server as the frontend to the world.
But gunicorn deployment has some advantages over fcgi:
speed: I didn't find the sources, but benchmarks say FCGI is not as fast as the F suggests
config files: with FCGI you must do all configuration on the command line when executing the manage.py command. This becomes unwieldy when running multiple Django instances via an init.d script (the system service startup on Unix-like OSes); with gunicorn it's always the same command line, just different configuration files
gunicorn can drop privileges: no need to do this in your init.d script, and it's easy to switch to one user per Django instance
gunicorn behaves more like a daemon: writing a pidfile and logfile, forking to the background, etc., again makes using it in an init.d script easier (see the sketch below)
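A hedged sketch of what those points look like on the command line; the project name, user, and file paths are assumptions, and the same options could live in a config file passed with -c:

# Drop privileges, daemonize, and write a pidfile and an error log
gunicorn myproject.wsgi \
  --bind 127.0.0.1:8000 --workers 3 \
  --user www-data --group www-data \
  --daemon --pid /run/myproject.pid \
  --error-logfile /var/log/myproject-error.log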
Thus, I would suggest using the gunicorn solution, unless you have a single low-traffic site on a single server, in which case you could use the mod_wsgi solution. But I think in the long run you'll be happier with gunicorn.
If you have a Django-only web server, I would suggest using nginx as the frontend proxy, as it's the best performing (again, this is based on benchmarks I read in some blog posts - I don't have the URL anymore).
Personally, I use Apache as the frontend proxy, as I need it for other sites hosted on the server.
A simple setup guide for Django deployment can be found here:
http://ericholscher.com/blog/2010/aug/16/lessons-learned-dash-easy-django-deployment/
My init.d script for gunicorn is located at github:
https://gist.github.com/753053
Unfortunately, I have not blogged about it yet, but an experienced sysadmin should be able to do the required setup.
Use Nginx or Apache with mod_wsgi and you can't go wrong.
If you prefer a simple alternative, just use Apache.
There is a very good deployment document: http://lethain.com/entry/2009/feb/13/the-django-and-ubuntu-intrepid-almanac/
I myself have faced a lot of problems in deploying Django projects and automating the deployment process. Apache and mod_wsgi were like a curse for Django deployment. There are several tools like Nginx, Gunicorn, Supervisord and Fabric which are trending for Django deployment. At first I used and configured them individually without deployment automation, which took a lot of time (I had to maintain testing as well as production servers for my client and had to update them as soon as a new feature was tested and approved), but then I stumbled upon django-fagungis, which totally automates my Django deployment, from cloning my project from Bitbucket to deploying on my remote server (it uses Nginx, Gunicorn, Supervisord, Fabric, and virtualenv, and also installs all the dependencies on the fly), all with just three commands :) You can find more about it in my blog post here. Now I don't even have to get involved in this process (which used to take a lot of my time); one of my junior developers runs those three commands of django-fagungis mentioned here on his local machine, and we get a crisp new copy of our project deployed in minutes without any hassle :)