Deploying first Django project on Amazon EC2 free tier - django

Finally the time has come and I'm ready to deploy my first Django project.
I'm a newbie in web development stuff and now the real fun begins.
This is a low-scale site for computer jobs.
I want to start with a free tier and grow from there as the need arises.
I've read some guides on Django project deployment but could not find all the answers, so I hope some folks here can help me out:
I've been thinking of getting an Amazon EC2 free tier VPS. Is this a good option?
My local development machine runs Ubuntu, and I've read that I could install a 10GB Ubuntu image. Do you recommend such an image?
Should I go with Apache or a lighter web server?
My project is hosted on Bitbucket; I just need to check out my project on my VPS, right?
What about data backups? I would like to back up my MySQL DB.
How do you recommend serving the static files?
I'm looking for a good tutorial on how to set up AWS with Django and MySQL.
Thanks, guys!

I've been thinking of getting an Amazon EC2 free tier VPS. Is this a good option?
If it fulfills your technology requirements (RAM, CPU, storage), it is a good option.
My local development machine runs Ubuntu, and I've read that I could install a 10GB Ubuntu image. Do you recommend such an image?
Might as well keep your environments the same if you can. If you can match up versions, that is another plus.
Should I go with Apache or a lighter web server?
Either. Apache would probably be easier to deploy at this point because you don't have to worry about running it as a service (using a program like supervisor to manage it).
Whichever one you choose, there is an abundance of tutorials online describing how to set it up with Django.
My project is hosted on Bitbucket; I just need to check out my project on my VPS, right?
That is one way. There are lots of ways to deploy. I like syncing the actual files using Fabric; that way your production server doesn't need to know about your Bitbucket account. Once again, there are many tutorials online describing Django deployment, and Fabric is a great place to start.
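As a rough illustration of what I mean, here is a minimal Fabric 1.x sketch; the host, paths and file names are placeholders I made up, not details from the question. It syncs the working copy to the server and reloads the WSGI process:

# fabfile.py: minimal deploy sketch (Fabric 1.x); host and paths are placeholders
from fabric.api import env, run
from fabric.contrib.project import rsync_project

env.hosts = ["deploy@your-ec2-host"]      # hypothetical SSH login
REMOTE_DIR = "/srv/myproject/"            # hypothetical deploy path

def deploy():
    # Sync the local working copy to the server, so the server never needs
    # credentials for the Bitbucket repository.
    rsync_project(remote_dir=REMOTE_DIR, local_dir="./",
                  exclude=[".git", "*.pyc"])
    # Reload the app; with Apache + mod_wsgi, touching the WSGI file is enough.
    run("touch %smyproject/wsgi.py" % REMOTE_DIR)

You would then run "fab deploy" from the project directory.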
What about data backups? I would like to back up my MySQL DB.
There exist lots of tools for this: plenty of premade tools and shell scripts. I have used automysqlbackup and it works great: http://sourceforge.net/projects/automysqlbackup/
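If you would rather roll your own, here is a minimal Python sketch of the same idea (this is not automysqlbackup; the database name, credentials file and backup directory are placeholders). It just wraps mysqldump, compresses the output, and could be run nightly from cron:

# backup_db.py: minimal MySQL backup sketch; names/paths are placeholders
import gzip
import subprocess
from datetime import datetime

DB_NAME = "myproject"                      # hypothetical database name
BACKUP_DIR = "/var/backups/mysql"          # hypothetical backup directory

def backup():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = "%s/%s-%s.sql.gz" % (BACKUP_DIR, DB_NAME, stamp)
    # --defaults-extra-file keeps credentials out of the command line
    dump = subprocess.check_output(
        ["mysqldump", "--defaults-extra-file=/etc/mysql/backup.cnf", DB_NAME])
    with gzip.open(outfile, "wb") as fh:
        fh.write(dump)

if __name__ == "__main__":
    backup()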
How do you recommend serving the static files?
Make sure the web server serves them. If you deploy through Apache you can set up an Alias to serve static files very easily. You can come up with a collectstatic deployment scheme to put your static files on S3, but for a simple site Apache would be just fine.
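For reference, the Django side of that setup is just a couple of settings plus collectstatic; a minimal sketch, with example paths that are not from the original post:

# settings.py: static files sketch; STATIC_ROOT is whatever directory your
# Apache Alias (or nginx location) actually serves
STATIC_URL = "/static/"
STATIC_ROOT = "/srv/myproject/static"

# After each deploy, gather all app static files into STATIC_ROOT with:
#   python manage.py collectstatic --noinput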
I'm looking for a good tutorial on how to set up AWS with Django and MySQL.
Perhaps you can find a tutorial that covers all of this, but most likely you will just find separate tutorials for:
how to set up AWS with Ubuntu
installing Django / MySQL on Ubuntu

Related

Google Cloud Run, Django and SQLite

I'm developing small single-user applications in Django. Currently I do so with Heroku, which works just fine. I would like to deploy the application on Google Cloud Run to have a bit more flexibility in the future.
In order to keep the overhead as small as possible I was considering using SQLite. To keep persistence, all I would need is a persistent volume, which could be achieved via Google Cloud Storage mounted into the Docker container via gcsfuse. But here is the issue: I can't find a small image with Python and gcsfuse. I'm not a Docker pro. Just getting started...
Any help is appreciated.
Google itself provides a how-to for deploying a (simple) Django app to Google Cloud Run:
https://codelabs.developers.google.com/codelabs/cloud-run-django/
I myself, as a Google Cloud rookie, followed it step by step and it worked without any problems. By the way, I found it via the "Advanced Deployment" episode of the podcast Django Chat with Katie McLaughlin.
The how-to uses a Cloud SQL instance instead of SQLite, but that seems to be a good choice:
"Generally speaking SQLite is not a good database choice for professional websites. So while it is fine to use SQLite locally while prototyping an idea, it is rare to actually use SQLite as the database on a production project."
(William S. Vincent, Django for Professionals, Chapter 2)
That aside, it shouldn't be too hard to skip the Cloud SQL step and keep SQLite.
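If you do keep SQLite, the only Django-side change is pointing the database file at the mounted bucket. A minimal sketch, assuming the bucket is mounted at /mnt/gcs via gcsfuse (the mount point is my assumption, not from the question):

# settings.py: sketch only; mount point is an assumption
from pathlib import Path

GCS_MOUNT = Path("/mnt/gcs")              # assumed gcsfuse mount point

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": str(GCS_MOUNT / "db.sqlite3"),
    }
}

Keep in mind that object storage mounted through gcsfuse behaves differently from a local disk (whole-file rewrites, no real file locking), so this is only reasonable for the single-user, low-traffic case described in the question.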

How to work with a local development server and deploy to a production server in Django?

I want to work locally on my Django (1.7) project and regularly deploy updates to a production server. How would you do this? I have not found anything about this in the docs. I am confused, because it seems like many people would want to do this and there should be some kind of standard solution. Or am I getting the whole workflow wrong?
I should note that I'm not expecting a step-by-step guide. I am just trying to understand the concept.
Assuming you already have your deployment server set up, and all you need to do is push code to your server, then you can just use git as a form of deployment.
Digital Ocean has a good tutorial at this link https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
Push sources to a git repository from a dev machine.
Pull sources on the production server. Restart uwsgi/whatever.
There is no standard way of doing this, so no, it cannot be included with Django or be thoroughly described in the docs.
If you're using a PaaS, how you deploy depends on the PaaS. Ditto for a container like Docker: you must follow the rules of that particular container.
If you're old-school and can ssh into a server you can rsync a snapshot of the code to the correct place after everything else is taken care of: database, ports, webserver setup etc. That's what I do, and I control stuff with bash scripts utilizing a makefile.
# where to deploy (placeholders)
REMOTEHOST=user@yourbox
REMOTEPATH=yourpath
REMOTE=$REMOTEHOST:$REMOTEPATH
# rsync the code tree to the server, then run the remote deploy target
make rsync REMOTE_URI=$REMOTE
ssh $REMOTEHOST make -C $REMOTEPATH deploy
My "deploy"-action is a monster but might be as easy as something that touches the wsgi-file used in order to reload the site. My medium complex ones cleans out stale files, run collectstatic and then reloads the site. The really complex ones creates a timestamped virtualenv, cloned database and remote code tree, a new server-setup that points to this, runs connection tests on the remote and if they succeed, switches the main site to point to the new versioned site, then emails me the version that is now in production, with the git hash and timestamp.
Lots of good solutions. Heroku has a good tutorial: https://devcenter.heroku.com/articles/getting-started-with-django
Check out a general guide for deploying to multiple PaaS providers here: http://www.paascheatsheet.com

Deploying WordPress on Elastic Beanstalk?

Suppose I create a site in WordPress, which is running on Elastic Beanstalk. Now, on the running app I will create posts/pages, upload images, etc. That is, some data, videos, files and records in a database will be added to the running application.
3 questions:
If WordPress is running on Elastic Beanstalk with multiple Amazon EC2 instances actually running my WordPress install, then will those files propagate automatically to all running instances? And will this also happen, if a new EC2 instance is fired up - for example, to handle increased load?
From what I see in the AWS console, I can deploy different versions of an app. But as per the scenario above, if I deploy a new version, won't I lose everything uploaded directly into the running app (i.e. files and database records)? How do I keep those and at the same time deploy a new version of my app?
The WordPress team keeps issuing upgrades. Can I directly upgrade my running WordPress install through the web interface? Or do I have to first upgrade my local version of WordPress, and then upload the new version of the app to Beanstalk? If I have to upgrade my local version and then upload, then again I am back to point 1, i.e. changes made by users directly to the older version of the running app. How do I preserve those changes?
I've been working on this as well, and have learned a couple of things that are relevant here -- your question about uploads in particular has been on my mind:
(1) The best way to handle uploads, it seems to me, is either to go the NFS/NAS route like you suggest or, better still, to use an Amazon S3 plugin for WordPress, so that any uploads automatically copy up to S3 and the URLs in your WordPress media library reflect the FQDN of your bucket and not your specific site. That way you could have one or ten WP nodes in your Beanstalk and media/images are independent of any one of those servers.
(2) You should absolutely be using RDS here. Few things are easier to work with and as stress-free as a Multi-AZ, reserved MySQL RDS instance. Either that or your own EC2 running MySQL that is independent of the Beanstalk, but why run that when RDS is so much easier?
(3) Yes, you definitely have to commit changes to your Git repository or local files first (new plugins, changes to themes, WP upgrades) and then upload/install them as a revision to the Beanstalk code. Otherwise, the changes you make via the web interface to one node will never be in the new load for a new node; in fact you'll have an upgraded database but an older set of code in the Beanstalk application, so it's likely to create errors of some kind or another.
I took an AWS architecture course, and their advice for EC2 and the Beanstalk is to start to think about server instances as very disposable -- so you should try to think about easy ways for your boxes to provision themselves in the bootstrapping process and to take over work for one another without any precious resources on just one box. So losing an instance should never be a big deal. (This is definitely not how we thought in the world of physical servers, where we got everything tweaked 'just so'.)
Good luck!
Well, I'm no expert, but since no one has answered, I'll give it my best shot.
You are absolutely right--kind of. While each EC2 instance does have some local storage, it is destroyed and reset with each new instance. Because of this, Amazon has things like Elastic Block Storage and S3 for persistent files. I don't know how one would configure WP to utilize this, but that will likely be the solution.
I think this problem is solved by my answer to #1. As for the database, all of your EC2 instances should be pulling from the same RDS location. Again, while you could have MySQL running on each EC2 instance, in the interest of persistence, having a separate database makes more sense.
You, again, have most everything right. Local development should always precede live deployment. Upgrading locally then pushing to the live servers will make sure all of your instances remain identical.
Truth be told, I am still in the process of learning all of this too; as I said, I'm not an expert. Hopefully someone else will come along and give a more informed answer. However, the key conceptual hurdle here is the idea of elastic scalability, and the salient point of this idea is the separation of elements between what is elastic/scalable/disposable and what is persistent.
Hopefully that helps.
I have deployed a small WordPress site on EB, S3 and RDS. S3 holds all static data, such as media uploads. This works through a plugin. RDS holds the database. EB holds the latest deployed application. The application is deployed from my dev environment, with a build script. This way, I just have to press one button and I redeploy.
I wrote an article about it here: http://www.cortexcode.com/wordpress-to-aws-code-example/
While it was at first annoying to work with, the speed of AWS is nice and now it's easier than ever. It used to be that I had to upload a bunch of files over FTP; this is way more efficient. :-)
As an addition to all the great answers already:
1) I can highly recommend EFS but also S3 for media files, so they are served from high-availability regions in combination with CloudFront. For WordPress there is one plugin that really speeds this up (not affiliated with them, just really like the plugin). There is also an assets plugin, if you'd like to serve JS and CSS files from S3. For the EFS solution, take a look at the AWS Labs docs on GitHub, and specifically this file on how they mount the uploads folder.
In general, EBS is really great for WordPress, but you'll need to think in a different mindset compared to other hosting solutions (shared hosting, managed hosting).
OK, I researched this particular issue a lot, and this is what I learned:
(1) If a WordPress user uploads some files, those files are uploaded only to the virtual machine that is actually serving his request at that time. For example, if the WordPress site is cloud-deployed and is using 5 virtual machines, then when a user makes a request he is directed to one virtual machine (the one with the lowest load at that point), and his uploads are stored only on that server. Current Platform-as-a-Service solutions (like Amazon Elastic Beanstalk and App Fog) do not have the ability to propagate such changes to all the running instances. Either propagate changes to all servers, or use storage common to all running instances: these are the only two solutions to this problem. (An example of common storage would be all 5 running virtual machines using Network-Attached Storage (NAS).)
(2) With the platforms currently available, like Amazon Elastic Beanstalk and App Fog, even if a user made changes directly to the running app, these platforms rely on the local version of the code (which the admin initially deployed to the cloud), and there is no way to update that local version (on the admin's PC) with the changes a user made to the running app; hence those changes, i.e. the files, are lost. Similarly, changes a user makes to the database of the running app are also lost, unless the admin is using exactly the same database for his local app as the one he deployed to the cloud.
(3) Any changes to running apps first have to be made to the local app on the admin's PC and then pushed to the cloud.
I am working on a cloud PaaS that addresses all these concerns: updates can be made to running apps, and code changes made to a running app are also pushed back to a code repository accessible by the user. The proof of concept is ready, and hopefully it will be as good as I hope it should be :) Currently the only thing that is actually there is the website (anyacloudpanel.com); design work is going on :)
If there is some rule that I should not mention my website (Anya Cloud Panel), then I am sorry; please feel free to edit and remove my website URL from my answer :)
Thanks,
Arvind.
Deploying WordPress to AWS Elastic Beanstalk does require some changes to the normal WordPress deployment, as mentioned here a few times. To answer your questions, here is a great tutorial explaining stateless applications and how to deploy to Elastic Beanstalk:
Deploying WordPress to Amazon Web Services AWS EC2 and RDS via ElasticBeanstalk
Be careful if you use a theme from ThemeForest, for example. Some of them are incompatible with the WordPress S3 plugin. Then you're screwed: you cannot deploy your WordPress site to the cloud.

How Are Experienced Web Developers Deploying Django Into Production on EC2?

I have never actually worked for a company which is deploying a Django app (with a large user base), and am curious about the best way to do this.
Right now I am hosting a Django app on EC2. The code for the app is sitting in my GitHub account. I have nginx serving static content, and behind it a single Apache server running Django + mod_wsgi.
I am trying to figure out what the best practice is for "continuous deployment". Right now, after I have added additional functionality I do the following on EC2:
1) git reset HEAD --hard
2) git pull
3) restart apache
4) restart nginx
I have custom logic in my settings.py file so that if I am running on EC2, DEBUG gets set to False, and my databases switch from SQLite (development) to MySQL (production).
This seems to be working for me now, but I am wondering what is wrong with this process and how could I improve it.
Thanks
I've worked with systems that use Fabric to deploy to multiple servers.
I'm the former lead developer at The Texas Tribune, which is 100% Django. We deployed to EC2 using RightScale. I didn't personally write the deployment scripts, but it allowed us to get new instances into the rotation very, very quickly and scales on demand. It's not cheap, but was worth every penny in my opinion.
I'd agree with John and say that Fabric is the tool to do this sort of thing comfortably. You probably don't want to configure git to deploy automatically with a post-commit hook, but you might want to configure a Fabric command to run your test suite locally, and then push to production only if it passes.
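Something along these lines, as a rough sketch with Fabric 1.x (the test command and the "production" deployment remote are placeholders of mine, not from the answer):

# fabfile.py: sketch, only push to production if the tests pass locally
from fabric.api import local

def test():
    # local() aborts the fab run if the command exits non-zero
    local("python manage.py test")

def deploy():
    test()
    local("git push production master")   # hypothetical deployment remote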
Many people run separate dev and production settings files, rather than having custom logic in there to detect if it's in a production environment. You can inherit from a unified file, and then override the bits that are different between dev and production. Then you start the server using the production file, rather than relying on a single unified settings.py.
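A minimal sketch of that layout (module, domain and database names are examples, not taken from the question):

# myproject/settings/base.py: everything shared between environments
DEBUG = False
ALLOWED_HOSTS = []

# myproject/settings/dev.py: local development
from .base import *
DEBUG = True
DATABASES = {"default": {"ENGINE": "django.db.backends.sqlite3",
                         "NAME": "dev.db"}}

# myproject/settings/production.py: production on EC2
from .base import *
ALLOWED_HOSTS = ["www.example.com"]       # placeholder domain
DATABASES = {"default": {"ENGINE": "django.db.backends.mysql",
                         "NAME": "myproject",
                         "USER": "myproject",
                         "PASSWORD": "change-me",
                         "HOST": "localhost"}}

You then start the server with DJANGO_SETTINGS_MODULE=myproject.settings.production (or manage.py --settings=myproject.settings.production) instead of branching on the environment inside a single settings.py.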
If you're just using Apache to host the application, you might benefit from a lighter-weight solution. Using FastCGI with nginx would allow you to do away with the overhead of Apache entirely. There's also a WSGI module for nginx, but I don't know if it's production-ready at this point.
There is one more good way to manage this: for Ubuntu/Debian AMIs it works well to manage versions and do deployments by packaging your application into a .deb.

Developing with Django, Git, and a Cloud Server

I'm currently working with a team at my university to put together a new webapp. Nothing too fancy, just run-of-the-mill MySQL + Django. We are also hoping to use Git for source control. We were wondering what hosting options were available to us. We're all very competent with Unix, so an SSH connection would be preferable. We also looked into the Amazon cloud, but are not sure if that's right for us. What does Stack Overflow suggest for a provider to host both a Git repo for us and our webapp? The simpler, the better. It should also run a Linux environment.
I have had great success using the Rackspace Cloud servers. You get root SSH into the server, so you can set up your Git repo and your web app there. They have a lot of options for which flavor of Linux you want to use as well.
I'm doing Django/Postgres on an Ubuntu server and haven't had any problems at all. As a bonus, it includes very easy web and API integration with their CDN if you're interested in that.
I looked into a variety of cloud providers and RS had the best options for me, although CDN integration was a big deal for my site so that factor weighed heavier than it might for you.
I use the cheapo 256MB RAM/10GB HD install and pay around $12/month after bandwidth costs are figured in.
Here's the pricing: http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/
Why not AWS? It has a free tier that is able to run basic Django apps well. You can run it using a Django AMI directly or a service like BitNami Cloud Hosting (disclaimer: I am a BitNami developer, and I am actually in charge of many of the Python-based stacks). Both options allow you to run a micro instance of an Amazon machine for free (680MB RAM, 10GB disk).
On BitNami Cloud Hosting, we recently added support for Python and Django (Python 2.6.5 and Django 1.3) and we already included Git. When you select to create a new server you will have access to all those components on top of Ubuntu 10.04.
Also, if you are interested in using Redmine (as dgel suggests), you can choose to install it on the same machine when you create your server. Since it is a university project, you may also want to consider hosting the Git part on github.com for free.
I would highly recommend sourcerepo.com for Git and Redmine hosting: $6.95 per month for unlimited projects, including Redmine instances with Git hooks. You don't need to worry about setting up or maintaining the Git repos or Redmine instances yourself.
Then for your project's public hosting you can't beat linode.com for $19.95 per month.