Move to 2 physical Django servers (front end and back end) from a single production server?

I currently have a growing Django production server that runs all of the front-end and back-end services. I could keep growing that server larger and larger, but instead I want to leave it as my back-end server and create multiple front-end servers that run apache/nginx and connect remotely to the main production back-end server.
I'm using Slicehost now, so I don't think I can benefit from having the multiple servers run on an intranet. How do I do this?

The first step in scaling your server is usually to separate the database server. I'm assuming this is all you mean by "backend services", since you haven't given any more details.
All this needs is a change to your settings file. Change DATABASE_HOST from localhost to the new IP of your database server.
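For example (a sketch, not your exact config): on older Django versions this is the single DATABASE_HOST setting, while on Django 1.2+ the same change lives in the DATABASES dict. The IP, database name, and credentials below are placeholders:
# settings.py -- point the ORM at the separate database server
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',          # placeholder database name
        'USER': 'myuser',        # placeholder credentials
        'PASSWORD': 'secret',
        'HOST': '10.0.0.2',      # was 'localhost' before the split
        'PORT': '5432',
    }
}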
If your site is heavy on static content, creating a separate media server could help. You may even look into a CDN.
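Splitting media out is again mostly a settings change; a minimal sketch, assuming a hypothetical media.example.com host serving your MEDIA_ROOT (a CDN would sit in front of, or replace, that host):
# settings.py -- serve uploads and static files from a separate host
MEDIA_URL = 'http://media.example.com/media/'
STATIC_URL = 'http://media.example.com/static/'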

The first step is usually to separate the server running the actual Python code from the database server. Any background jobs that do processing would probably run on the database server. I assume that when you say front-end server, you actually mean a server running Python code.
Now, as every request will have to make a number of database queries, latency between the web server and the database server is very important. I don't know whether Slicehost has a feature that lets you create two virtual machines that are "close" in terms of network latency (a quick Google search did not find anything). They seem like nice guys, so maybe you could ask them whether they have such a service or could make an exception.
Anyway, once you do have two machines on Slicehost, you can check the latency between them by simply pinging from one to the other. The result will probably tell you whether this is feasible at all.
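If you want a number closer to real conditions than a bare ping, here is a rough sketch you could run from python manage.py shell on the web server, assuming DATABASES already points at the remote database host:
# Measures web-to-DB round-trip latency; SELECT 1 does almost no
# work server-side, so the timing is dominated by the network.
import time
from django.db import connection

def db_round_trip_ms(samples=20):
    cursor = connection.cursor()
    timings = []
    for _ in range(samples):
        start = time.time()
        cursor.execute('SELECT 1')
        cursor.fetchone()
        timings.append((time.time() - start) * 1000)
    return sum(timings) / len(timings)

print('average round trip: %.2f ms' % db_round_trip_ms())
Anything consistently above a few milliseconds will be felt on pages that run many queries.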
Further steps depend on your application. If it is media-heavy, then a separate media server might make sense. Otherwise, the normal step is to add more web servers.
--
As a side note, I personally think it makes more sense to invest in real dedicated servers with dedicated network equipment for this kind of setup. This of course depends on your budget.
I would also suggest looking into Amazon EC2 where you can provision servers that are magically close to each other.

Related

Suddenly scheduled tasks are not running in coldfusion 8

I am using a ColdFusion MX8 server, and one of the scheduled tasks had been running for 2 years, but suddenly, since 01/12/2014, the scheduled tasks are not running. When I browse the file in a browser, the file runs successfully without error.
I am not sure whether there is an update or license expiration problem. I am aware that Adobe ended support for ColdFusion 8 in the middle of this year.
The most common cause of this kind of problem is external to the server. When you say you browsed to the file and it worked in a browser, it is very important to know whether that test was performed on the server desktop itself. Knowing that you can browse to the file from your own desktop or laptop is of little value.
The most common source of issues like this is a change in the DNS or network stack that is interfering with resolution. For example, if the internal DNS serving your DMZ suddenly starts serving the "external" address, suddenly your server can't browse to your domain. Or the IP the server resolves for the domain in question goes from being 127.0.0.1 to some other IP that the server can't access correctly due to a reverse proxy, load balancer, or some other rule. Finally, sometimes Apache or IIS is altered so that an IP that was previously serviced (127.0.0.1 being the most common example) no longer responds.
If it is something intrinsic to the scheduler service, then Frank's advice is pretty good - especially look for "proxy scheduler" entries in the log; they can give you good clues. I would also log the results of a scheduled task to a file, then check the file. If it exists, then your scheduled tasks ARE running - they are just not succeeding. Good luck!
I've seen the cf scheduling service crash in CF8. The rest of CF is unaffected.
Have you tried restarting the server?
Here are your concerns:
Your file (works, since you tested it manually).
Your scheduled task (failed).
Your ColdFusion application/service (any changes here?).
Your server (any changes there?).
To test your problem, create a duplicate task and schedule it. Leave the other one in place (maybe set your new one to run earlier). Use the same file too. See if it completes.
If it doesn't, then you have a larger problem. Since the ColdFusion server sits atop the JVM, there could be something happening there. Things don't just stop working unless something got corrupted or you got compromised. If you hardened your server by rearranging/renaming the file structure to make it more secure, that would break your task.
So, going back: if your test schedule works, determine what is different between the two. Note that you have logging capabilities in CF8.
If you are not directly in charge of maintaining this server, then I would recommend asking around to see whether there was recent maintenance and, if so, what was done to the server.

Handling Uploads on a Django Site Behind a Load Balancer

I have a small- to medium-sized Django project where the client has been forced to change hosts. The new host convinced them they definitely needed a couple of web servers behind a load balancer (and to break the database off onto a third server). I have everything ported over to the new setup, but I can't make it live yet because I'm not sure how best to handle file uploads: they will only end up on the server the user happens to be connected to. Given the three servers (counting the db, which could double as a static file server if I had to), what's the cleanest and easiest way to handle this situation?
A simple solution, which has some latency and doesn't scale beyond a few servers, is to use rsync between hosts. Add it to cron to sync the upload dir in both directions. Sticky sessions would also help here: the uploader sees their file as available immediately, and other visitors can get the file after the next rsync completes.
This way you also get a free backup.
/usr/bin/rsync -url --size-only -e "ssh -i servers_ssh.key" user@server2:/dir /dir
(you'll have to have this in cron on both servers, each pointing at the other)

Django-celery on multiple computers

I got everything I wanted to do with django-celery working on my development machine. More specifically, the app accepts photo URLs, which are then turned into tasks that the same machine downloads.
Now what I want to do is put the django code on heroku and the celery tasks on a dedicated computer that will be kept in the office.
I don't know what the next step is though. How do I tell the django app to connect to the office computer? What is the process for setting up the office computer to accepts tasks from the django app? How do I give the local computer login credentials to the django app so that it can connect to the database to update the models?
Ideally, I am looking to put something like this in my settings.py file:
remote_worker = '123.2.4.23:1234'
and on the office computer
tasks = 'photos/tasks.py'
remote_app = 'herokuapp123.com/myapp'
username = 'me'
password = 'pw'
I know there are a lot of questions. Any help or pointers would be appreciated!
This largely depends on which AMQP backend you are using for celery. If you are using the default (RabbitMQ), you will need to do one of the following:
Install RabbitMQ on the Heroku server, expose its port to your business IP through the firewall, and configure your office computer to connect to it
Install RabbitMQ locally on your business computer and configure celery on Heroku to connect to it
Install RabbitMQ on both sides and bridge them.
Alternatively you can integrate the heroku server in your own business network using a VPN solution and have them directly talk to each other (because after all you probably don't want to transmit AMQP packets bare over the interwebz).
Scenario 1 is probably the easiest to set up, as Heroku already provides you the plugin infrastructure to do so. Scenario 2 is probably not what you want, as you will have to punch a hole in your business firewall for it. Both scenarios 1 and 2 will have latency and reliability issues, as routing AMQP traffic over the internet is not going to be fast or reliable. You will have dropped messages, and celery will keep retrying until it succeeds or reaches the maximum number of failures. AMQP was designed to handle network issues, but they may still hurt your performance if that is critical; then again, in that case you should reconsider putting the celery workers on a business desktop at all.
Scenario 3 is probably the best in terms of reliability, but also the most difficult to set up. Choose based on your needs.
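As a concrete sketch of scenario 2 (RabbitMQ at the office, with django-celery), the Heroku side is essentially one broker setting. The host, port, vhost, and credentials below are all placeholders, and your office firewall would have to forward the AMQP port (5672 by default) to the RabbitMQ machine:
# settings.py on Heroku -- point celery at the office RabbitMQ
BROKER_URL = 'amqp://me:pw@office.example.com:5672/myvhost'

# On the office computer, run a worker against the same project code:
#   python manage.py celery worker --loglevel=info
# The worker process reads your normal settings, so pointing its
# DATABASES at the shared database lets tasks update your models.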

Django web server -- where should I draw the line between production and development?

I know that it's bad to use the Django development server in production. There's been at least one Stack Overflow question on this already.
But I'm wondering about where to draw the line between development and production? If I'm only allowing HTTP access to one (or a few) IP addresses, then I know I'm in development. What if I open it to all IP addresses, but only e-mail a couple friends to see what they think of what I've built?
As far as I can tell, the problems with using the Django server are:
1. It's single-threaded
2. Security
I don't think (1) is likely to be an issue if I'm only sharing it with a few people. For (2)--what's the worst-case scenario? Does it make a difference that I'm running on an Amazon EC2 server that I could very easily restart from a backup if something bad happened?
Well, the answer is actually very simple: you've left development when you have something you must protect, such as real users' personal information or real data in your database that you'd be afraid to lose.
Security isn't a concern until these things are present. The rule about not using the dev server in "production" is guidance, not mandatory. You can fire up the dev server in your production environment any time you want. However, you'd be silly to do so and then open up universal access to it, once your site is truly live and in use by the world.
Setting up mod_wsgi (or some other WSGI container) on a development machine takes all of 5 minutes, and can help you sort out deployment issues before you actually reach deployment. So really, why ever use the development server if you don't have to?
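For reference, the Python half of a mod_wsgi deployment is tiny; a minimal sketch, assuming Django 1.4+ and a project named mysite (the name is a placeholder):
# mysite/wsgi.py -- the entry point mod_wsgi (or any WSGI server)
# loads instead of the single-threaded development server
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Apache then points WSGIScriptAlias at this file; the rest is ordinary virtual-host configuration.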

Cache data on Multiple Hosts in AppFabric

Let me first explain that I am very new to using AppFabric to improve the responsiveness of an application. I am trying to configure a server cluster with 2 nodes, using the XML provider over a network shared location.
My requirement is that the cached data should exist on both hosts, so that if one host is down, the other host in the cluster can still serve the request and provide the cached data. As I said, I have 2 hosts in my cluster, and one of them is designated the lead host. When I save data to the cache, I cannot see the data on both hosts (I am not sure whether there is a specific command to view the data on a specific host). So what I want to test is this: I'll stop one of the cache hosts and see whether I can still get the data from the second cache host.
thanks in advance
-Nitin
What you're talking about here is High Availability. To enable this, you'll need to be running Windows Server Enterprise Edition - if you're on Standard Edition then you just can't do it. You also really need a minimum of three hosts, so that if one goes down there are still two copies of your cached data to provide failover. If you can meet these requirements then the only extra step to create a highly-available cache is to set the Secondaries flag when you call new-cache e.g.
new-cache myHACache -Secondaries 1
There's no programmatic way to query what data is held on a specific host, because you only ever address the logical cache, not an individual physical host.
From our experience, using SQL authentication to the database does not work. It's clearly stated that only the Integrated Security option is supported. We also faced issues running the service with Integrated Security, since our SQL cluster was running under a domain account while AppFabric needs to run as Network Service, and we couldn't successfully connect to the SQL cluster from the AppFabric service.
This was a painful experience for us, and I hope AppFabric caching improves its error messages and error codes, and also lets us decide how we want to connect to SQL. It's kind of stupid having to undergo the pain of "has to run as Network Service" and "no SQL authentication".