I have a Django application that uses an SQLite database. I use git to push changes to my EC2 instance, which runs the website on an Elastic IP. The site itself works, but when I try to log in to the admin interface I get one of two errors from Django:
attempt to write a readonly database
or
unable to open database file
It seems that chmod u+rw leads to the first error and a+rw leads to the second, but I'm not sure what is happening. The test server on my local machine works as expected.
I realize that SQLite may be a bad choice for production, but the site will not have much traffic and I will be the only one using the admin interface or writing to the database. If someone has a solution for setting up MySQL or Postgres and somehow synchronizing the database contents, I would accept that too.
I found the solution after much research. I had only defined one user on my EC2 server, but Apache runs as the user www-data, so both the project directory and the database file need to be writable by that user:
sudo chown www-data /projectdir
sudo chown www-data /projectdir/sqlite.db
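For anyone hitting the same pair of errors, here is a quick sanity check, assuming Apache on a Debian/Ubuntu-style system (the paths are illustrative). Note that SQLite creates journal files alongside the database, which is why the directory itself must be writable too:
ps aux | grep -E '[a]pache2|[h]ttpd'    # shows which user the worker processes run as (e.g. www-data)
sudo chown www-data:www-data /projectdir /projectdir/sqlite.db
sudo chmod 664 /projectdir/sqlite.db    # the file must be writable by the server user
sudo chmod 775 /projectdir              # and so must the directory, for the journal file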
Is your application running in multiple threads?
The problem is that sqlite3 databases cannot be shared between multiple threads of your Django application. I am not sure whether this is a general sqlite3 bug, a Django bug, or simply intended behavior. Maybe other users have figured out a way to deal with it; I didn't, and I use either PostgreSQL or MySQL on production servers.
If your site really is low traffic, you may just set your web server's config to run your app single-threaded (and within a single process). This will have similar behavior (and limits) to running the Django test server locally.
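For illustration, if the app were served with Gunicorn, a single process with a single thread would look like this (myproject.wsgi is a placeholder for your own WSGI module):
# one worker process, one thread: mirrors the Django test server's concurrency model
gunicorn --workers 1 --threads 1 myproject.wsgi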
I am running a Django server using Gunicorn. In front of Gunicorn I have a layer of Nginx acting as a load balancer, and I am using supervisord to manage Gunicorn.
From a security perspective, is it fine to run my Gunicorn server with sudo permissions? Is there any potential security risk?
Also, does it make any difference if I am a superuser and just not running the process with sudo, since in any case my user has sudo permissions?
Does it need to run as root?
If it doesn't, don't run it as root.
Even better, add a separate user for the app and run it as that user.
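A minimal sketch of what that looks like on a Debian/Ubuntu-style system; the user name and paths are illustrative:
sudo useradd --system --no-create-home --shell /usr/sbin/nologin appuser
sudo chown -R appuser:appuser /srv/myapp
# run gunicorn as the unprivileged user, bound to a local port behind nginx
sudo -u appuser /srv/myapp/venv/bin/gunicorn --bind 127.0.0.1:8000 myproject.wsgi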
I believe the answer to the question "is it ok to run xxx with root permissions" should not be "If it doesn't, don't run it as root." but rather a clear "NO".
Every single server and framework is designed to be run without root rights.
What can go wrong? If you have a vulnerability that allows remote code execution on the server, you would simply be handing root rights to whoever exploits it. If one of the developers on your team does something careless like deleting the root directory, it will be deleted. You don't want a single app running on the server to be able to disrupt your whole system, do you?
It is not good practice to run any external, network-facing application with root privileges.
Consider a scenario where your uploaded files are not validated or sanitized (a file upload vulnerability). Suppose someone uploads a malicious file, say one implementing a reverse shell, and executes it: the shell inherits the privileges of the server process, so if that process runs as root, it becomes much easier to take down your server.
I am building an app using Django on an EC2 Ubuntu instance, and I have associated an Elastic IP with my instance.
I have done the following steps:
1. First created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using this command: python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group's inbound settings to allow HTTP on port 80 from 0.0.0.0/0.
9. Able to reach my project from any browser.
But the problem is that when I close the PuTTY session where I ran the runserver command, the Django project also stops. I did not stop it manually.
Please help me keep it running after closing the PuTTY session as well.
Thanks,
Kripa Sharma
Take a look at this Answer
I highly recommend that you start using Elastic Beanstalk (Python instance) to take care of all these steps for you. It is very simple to set up, and there is no need to worry about any of the steps you listed.
You can use these instructions to see how to deploy a Django app in less than 5 minutes.
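As a rough sketch, deploying with the Elastic Beanstalk CLI looks something like this; the project and environment names are placeholders:
pip install awsebcli
eb init -p python-3.8 myproject   # the platform string depends on what EB currently offers
eb create myproject-env           # provisions the instance, load balancer, and security group
eb deploy                         # pushes subsequent code changes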
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at the "How to deploy with WSGI" section, which gives a few recommendations for commonly used Web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a webserver, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both.
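For instance, a rough sketch of the systemd route on a systemd-based Ubuntu; the unit name, user, and paths are all illustrative:
sudo tee /etc/systemd/system/gunicorn.service >/dev/null <<'EOF'
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/myproject/venv/bin/gunicorn --bind 127.0.0.1:8000 myproject.wsgi

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now gunicorn   # start now and on every boot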
The latter is usually achieved with an HTTP server/proxy such as Nginx or Apache httpd. And again, Gunicorn has an example for us.
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service, or even in a terminal multiplexer such as GNU screen or tmux, that is not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration).
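For completeness, a quick tmux session that survives a dropped SSH connection (the session name is arbitrary):
tmux new -s django                      # start a named session, then run your command inside it
python manage.py runserver 0.0.0.0:80   # press Ctrl-b then d to detach
tmux attach -t django                   # reattach later from a new SSH session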
I have a Django application running on a server. I want to use Let's Encrypt to provide an encrypted connection. I could use the standalone option of their ACME client, but I don't want to stop my server, which I would have to do.
So there is the webroot option, which works with my already-running web server (Nginx). Django would process the request in this case. My question is: what should this look like on the Django side to get it running (keeping automated renewal every few months in mind)?
I don't know what setup others use, but I generally set up Django apps with Nginx serving static content and Gunicorn as the application server. It's widely accepted that Django apps usually use this kind of two web server setup. The standard instructions for setting up Let's Encrypt with Nginx worked fine for me.
Or Digital Ocean have an excellent guide too.
EDIT: It looks like Nginx can do a "graceful" reload that just updates the config with no downtime. For Debian or Ubuntu before systemd this would be sudo service nginx reload, while for a distro with systemd the command is sudo systemctl reload nginx.service.
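For reference, a webroot issuance plus reload looks roughly like this with certbot; the webroot path and domain are placeholders:
sudo certbot certonly --webroot -w /var/www/myproject -d example.com
sudo systemctl reload nginx.service    # pick up the new certificate with no downtime
# for automated renewal, something like this in cron:
# 0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"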
In case other users come this way like I did from Google, here's how I improved this situation:
I was unsatisfied with my options for creating ACME challenges for Let's Encrypt when running a Django application, so I rolled my own solution and created a Django app! Basically, you can manage your ACME challenges as just another object, and the app will serve the proper endpoint URL.
Yes, you are installing an app, which means a deploy/update of your project, but once you've done that, managing your challenges is far easier in the long run.
Simply pip install django-letsencrypt and follow the README to be on your way.
I am quite computer-illiterate, but I have managed to use the Django framework on my own machine. I have had an account on Amazon Web Services (AWS) for some time, but it appeared rather complex to set up and make use of, so I put it off for a while. Then I decided to give it a try, and it was not as hard as I first thought to load an AMI and connect to the server with PuTTY. But since I was already using BitNami's Django Stack, I decided to take a look at their hosting offer (which builds on AWS). Since they appeared to offer "one-click deployment", I set up a new server through their interface. But then, it seems the "one-click deployment" promise applies only to the server itself: there does not seem to be any interface for deploying Django projects through their site. Having used PuTTY already, and adding WinSCP to my machine, I can access the server and load my Django code onto it. But then I am lost. The documentation seems a bit thin (look here).
The crux of this is the following: can anyone make this part of the process more understandable, i.e., how do I deploy a Django project on a Linux server with Apache/mod_WSGI?
The other question is: I want to use Postgres. Am I free to install this on the server? Should I opt for EBS (Elastic Block Store) for this, or what is the downside of not having EBS?
I hope I am not too unworthy of your attention, thanks!
"how to deploy a Django project on a Linux server with Apache/mod_WSGI"
The Bitnami AMI already comes with all of this configured. Once installed, go to the EC2 public URL on the default port 8000 and you will see the demo Django project set up there. You can add your own project once you have logged into the machine via PuTTY; check the /home/bitnami/ directory for the demo project. Copy your project over and configure your database.
"The other question is: I want to use Postgres. Am I free to install this on the server?"
Postgres and MySQL are already installed; use them the same way you would on your local machine. Then in your project run ./manage.py runserver 0.0.0.0:9000, since port 8000 is already running another application.
I have a new website built on Django and Python 2.6 which I've deployed to the cloud (buzzword compliant AND the Amazon micro EC2 instance is free!).
Here are my detailed notes: https://docs.google.com/document/d/1qcZ_SqxNcFlGKNyp-CXqcFxXXsKs26Avv3mytXGCedA/edit?hl=en_US
As this is a new site (and wanting to play with the latest and greatest) I used Nginx and Gunicorn on top of Supervisor.
All software installed from trunk using YUM / easy_install.
My database is SQLite (for now; not sure where to go next, but that is not the question). Also on the todo list: virtualenv + pip.
So far so good.
My code is in SVN. I wrote a simple fabfile to deploy: it checks out the latest code and restarts Gunicorn via Supervisor. I hooked my DNS name to an Elastic IP.
It works.
My question is, how do I update the site without a disruption of service? Users of the site get 404s / 500s when I run my little update script.
Is there a way to do this without adding another server (price is key)?
I would love to have a staging system (on a different port?) and a seamless switch between Staging and Production. On the same (free) server. Via Fabric.
How do I do that? Is it the same Nginx running both sites? Can I upgrade Staging without hurting Production? What would the fabfile look like? What would the directory tree look like?
Thanks!
Tal.
Related:
Seamless deployment in Rails
Nginx allows you to set up failover for your reverse proxies: you can put one Gunicorn instance as the primary, and as long as that instance is running, Nginx will never look at the failover.
If you configure your site so that the new version is in the failover instance, you just need to write your fabfile to update the failover instance with the new version of the site and then, when ready, turn off the primary instance. Nginx will seamlessly fail over to the second instance, and bam, you are running the new version with no downtime.
You can then update the primary instance and turn it back on, and your primary is live again. At this point you can keep the failover instance running just in case, or turn it off.
Some things to consider: you have to be careful with databases. If you are using SQLite, make sure both Gunicorn instances can access the SQLite file.
If you have a normal database this is less of a problem; you just need to make sure you apply any database migrations that the new version needs before you switch to it.
If they are backwards-compatible changes then it isn't a big deal. If they aren't backwards compatible, be careful: you could break the old version of the site before you switch over to the new version.
To make things easier I would run the two versions in separate virtual environments.
If you use supervisord to control Gunicorn, then you can use the supervisorctl commands to reload/restart whichever instance you want to deploy without affecting the other one.
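For example, with two hypothetical supervisor program names, one per Gunicorn instance:
supervisorctl restart gunicorn_failover   # redeploy one instance while the other serves traffic
supervisorctl status                      # confirm the state of both instances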
Hope that helps
Here is an example of an Nginx config (not a full config file; the unimportant parts have been removed).
This assumes the primary Gunicorn instance is running on port 9005 and the other is running on port 9006.
upstream service-backend {
    server localhost:9005;           # primary
    server localhost:9006 backup;    # only used when the primary is down
}

server {
    listen 80;
    root /opt/htdocs;
    server_name localhost;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass http://service-backend;
    }
}
Sounds like you need to figure out how to tell Gunicorn to gracefully restart. It seems all you have to do is issue a HUP signal to the Gunicorn master process to notify it to reload the app. As described in the link above, the Gunicorn docs explain how to do it.
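For what it's worth, a graceful reload boils down to something like this; the pid file path is illustrative (set it with Gunicorn's --pid option):
# on SIGHUP gunicorn starts fresh workers with the new code and gracefully retires the old ones
kill -HUP "$(cat /var/run/gunicorn.pid)"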