I'm creating a Django project that will be used by multiple domains, and the functionality will be slightly different depending on the domain. I'm looking for advice on the proper way to set this up.
The sites framework seems like it would be a good fit for doing some of the customizations once processing has reached the point where it's executing the Django code. But I'm trying to determine what the setup should be before we reach that point (relating to the nginx/flup/FastCGI configuration).
Here is my current understanding:
It seems like multiple Django settings files are appropriate, each with a different SITE_ID. Two virtual hosts would then be set up in the nginx configuration, pointing to two different sockets. Two 'manage.py runfcgi' processes would then be used to listen on those two sockets, and each process would reference a different settings file:
./manage.py --settings=settings.site1 runfcgi method=prefork socket=/home/user/mysite1.sock pidfile=django1.pid
./manage.py --settings=settings.site2 runfcgi method=prefork socket=/home/user/mysite2.sock pidfile=django2.pid
However, it seems like this could get messy as more domains are added: every new domain would require another 'manage.py runfcgi' process. Is there a way to support multiple sites this way without running a separate process for each?
What are your experiences with hosting multiple domains with Django?
Any advice is much appreciated. Thank you for reading.
Joe
If you are going to have a lot of domains running, one process per domain might get quite expensive. The sites framework was originally made with another use case in mind: being able to easily create "duplicate" content on several news sites. When trying to use the sites framework for other uses you run into several difficulties.
One possibility is to move the domain processing into a middleware and have a single Django instance handle the multi-domain part. It's not trivial though, especially if you have to tweak apps to be domain-aware, and also URLconfs, templates, etc. A quick Google search turned up:
http://djangosnippets.org/snippets/1119/
Might help as a starting point.
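To make the idea concrete, here is a minimal sketch of such a middleware, assuming you key Site records on the request's Host header (attaching the result as request.site is my own convention, not Django's):

    # multisite_middleware.py -- a rough sketch, not production code
    from django.contrib.sites.models import Site
    from django.http import HttpResponseNotFound

    class MultiSiteMiddleware(object):
        """Look up the Site matching the Host header and attach it to the request."""
        def process_request(self, request):
            host = request.get_host().split(':')[0]  # drop any port number
            try:
                request.site = Site.objects.get(domain=host)
            except Site.DoesNotExist:
                return HttpResponseNotFound('No site configured for %s' % host)

Views, context processors and templates can then branch on request.site instead of relying on a per-process SITE_ID, so one process can serve every domain.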
Is it possible to have Django running on the server and have one Django application communicate with another Python process, say one that I developed, fetching a response from it or even just making it perform a particular action?
It can be synchronous or asynchronous; I have some idea of doing it asynchronously, where a package like Hendrix, Crossbar.io or even Celery could be used. But I don't understand what this inter-communication would be called, or how I should plan the architecture for it.
I have the following two situations going around in my head, for which I'm seeking a plan:
1. Say I have Django and an e-mail sender built with Python's smtplib. A user making a request to a view would make Django execute the Python module I developed for sending an email to a particular user (via an SMTP server from Google/Gmail). It could be synchronous or asynchronous.
OR
2. I have Django (some application) and I want it to communicate with some server I maintain, say to make that server execute some code or just fetch a file (if it is an FTP server). Is 'microservices' the appropriate term for this situation, or is there another term or workaround here?
Your first solution would be called an installable Python module, just like any package you install with pip. You can keep it as a separate module if you need your code to be re-usable across multiple (or just future) projects.
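For instance, the email sender could be a small re-usable module wrapping Django's own mail support (a minimal sketch; the function name and addresses are illustrative, and it assumes EMAIL_HOST, EMAIL_PORT, EMAIL_USE_TLS and the credentials are configured in settings.py for Gmail's SMTP server):

    # mailer.py -- sketch of a re-usable email-sending module
    from django.core.mail import send_mail

    def notify_user(subject, message, recipient):
        """Send a plain-text email to one recipient via the configured SMTP server."""
        send_mail(subject, message, 'noreply@example.com', [recipient])

A view can call notify_user() directly (synchronous), or hand it off to a task queue such as Celery if you want it asynchronous.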
Your second solution would be a microservice. This will require setting your small module up as a service with, say, a REST API you can communicate with to make it do whatever you intend it to do.
If your question is "what is the right approach", then I would tell you it depends on your use case. If this is just some re-usable code that you don't want to repeat over and over throughout your project, then just make it into a separate module. If, on the other hand, this is a service that you expect other services to use and rely on, then make it into a microservice. You can use a microframework such as Flask for easier and faster setup of your service. Otherwise, if it's just some code that you will use once and that serves a single piece of functionality in your application, then just write it and keep it there.
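As a rough sketch of what the Flask variant could look like (the route name and payload shape are illustrative, not a fixed API):

    # service.py -- minimal microservice sketch (Flask)
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/run-task', methods=['POST'])
    def run_task():
        payload = request.get_json(force=True)
        # ... do the real work here: run some code, fetch a file, etc. ...
        return jsonify({'status': 'ok', 'received': payload})

    if __name__ == '__main__':
        app.run(port=5000)

Your Django views would then talk to it over HTTP, e.g. requests.post('http://localhost:5000/run-task', json={...}).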
There are no rules or standards on which approach should be taken. I personally judge things depending on the use-case.
Hope this helps!
In a couple of months, I'm receiving a single (physical) Ubuntu LTS server for the purpose of a corporate, intranet-only web tools site. I've been trying to figure out a framework to go with. My preference at this point would be to go with Django. I've used RoR, CF and PHP heavily in the past.
My Django concern right now is how to have both a separate '/web/' and '/dev/' environment when I'm only getting a single server. Of course this also includes needing separate 'web' and 'dev' databases (either separated by database name or by having two different db instances running on the single server).
Option 1: I know I could set up only a 'web' (production) environment on Ubuntu and then use my corporate Windows laptop to develop the Django tools. I've read this works fine, except that a lot of 3rd-party Django packages don't work on Windows. My other concern would be making code changes and then pushing them to the Ubuntu server, where I might introduce problems that didn't show up in the local Windows development environment.
Option 2: Somehow set up separate Django 'web' and 'dev' environments on the same server. I've seen a lot of different and confusing information on this. Also adding to the complication is what I assume would be the need to have two database instances running on the same server. Or, how could you have two different Django environments for 'web' and 'dev' that point to different databases by name, instead of needing two db instances running?
Thanks for any advice. I'm actually having trouble relaxing and learning Django, not knowing how bad this is going to be to deal with. I could easily just put up with the pain of developing in basic PHP if this is too over-complicated. With plain PHP it's dead simple to have a '/web/' and a '/dev/' path and separate dbs just by checking the URL or file path for '/web/' or '/dev/' (and then pointing to the right db, for example 'mytool_dev_v1' / 'mytool_web_v1').
There are multiple ways to solve this problem:
You can run two separate instances of Django on the same server in different virtual environments. You can configure them in multiple ways: using environment variables, or using separate 'production' and 'dev' settings files and choosing which one gets used (see the sketch after this list).
You can use Docker containers to serve the different Django instances. I think this is the best way. You can configure them in the same manner: via environment variables or via multiple settings files for the 'dev' and 'prod' options.
If you want to serve two (or more) sites on the same server, you'll probably need to configure nginx to route requests to the separate containers or Django instances depending on the domain name or something else (the URL, for example).
As far as I know, there is no problem configuring a separate database for each instance. You can also run your Postgres or MySQL instance in a container, and nginx the same way.
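Here is a minimal sketch of the environment-variable approach, re-using the database names from the question (the DJANGO_ENV variable name is my own choice):

    # settings.py -- sketch: pick the database by environment variable
    import os

    ENV = os.environ.get('DJANGO_ENV', 'dev')  # set DJANGO_ENV=web on production

    DEBUG = (ENV != 'web')

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'mytool_web_v1' if ENV == 'web' else 'mytool_dev_v1',
            'USER': 'mytool',
            'PASSWORD': os.environ.get('DB_PASSWORD', ''),
            'HOST': 'localhost',
        }
    }

Both environments can then share one Postgres server while keeping completely separate databases.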
I can't recommend developing your app on the same server where the production app is running. I'm convinced that development should happen on the developer's computer, but yeah... Windows is not the best platform for Django development, though it mostly works. Otherwise I can recommend dual-booting or at least VirtualBox with Ubuntu.
There are many things that differ between development and production. For example, when using the Facebook API, I need to change the id of the application (because there are different ids for testing and production) every time I push an update to the app.
Only the app itself changes when I update, so what do Django developers usually do in this case? Possibly saving a variable in settings.py and reading it from there, or creating a separate file in the virtual environment folder, which in my case is also kept separate?
There is no official way of splitting your Django settings for prod and dev -- developers are encouraged to find a way that works for them. The Django wiki lists several good options: https://code.djangoproject.com/wiki/SplitSettings
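One of the most common patterns on that page is a local override file kept out of version control, so each environment carries its own values (a sketch; FACEBOOK_APP_ID comes from your example):

    # settings.py -- sketch of the local-override pattern
    FACEBOOK_APP_ID = 'test-app-id'  # safe development default

    # local_settings.py exists only on the target machine and is never
    # committed; on production it overrides FACEBOOK_APP_ID with the real id
    try:
        from local_settings import *
    except ImportError:
        pass

Deploying then never requires editing tracked code; only the untracked local_settings.py differs per machine.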
I am about to start a Django project where I need a base deployment, let's say just for admins initially. Later, admins can add instances of my main public site.
Now, the instances will obviously be separated by dynamic sub-domains. I need to capture the sub-domain from each request and compute accordingly. Each instance has its own base templates, static files, etc. (the easiest part). Each would have a set of feature apps (common to all instances, but not the data in their models). I am thinking of using Django 1.2's multiple database support and trying to get one db per instance (added dynamically :( , if that is feasible; it would involve creating databases/models on the fly). Or I could go with adding an instance foreign key to all feature app models, to separate the data instance-wise.
If my instances were known prior to deployment, I could easily have used multiple database support by capturing the sub-domain and diverting my ORM calls to the corresponding db, roughly as sketched below. But that is not the case; they have to be dynamic (added as the need arises).
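For illustration, here is roughly what I mean by diverting ORM calls, assuming one DATABASES alias per sub-domain (Article is a stand-in for any feature-app model):

    # views.py -- sketch: route queries to the db alias matching the sub-domain
    from django.http import HttpResponse
    from myapp.models import Article  # hypothetical feature-app model

    def article_list(request):
        subdomain = request.get_host().split(':')[0].split('.')[0]
        # assumes settings.DATABASES has an alias named after each sub-domain
        articles = Article.objects.using(subdomain).all()
        return HttpResponse(', '.join(a.title for a in articles))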
Now, before I give it a try, and to find a solution (or get rid of my delusions about it), I would like the experts of SO to think it over. I would appreciate suggestions, insights and, of course, criticism.
I can make it community wiki, if suggested. Thanks guys.
Shouldn't you just run a separate Django instance, each in its own Apache VirtualHost? Then you can have a Django settings file for each instance, each pointing at its proper database. This also simplifies your code because you don't need to map subdomain names to databases inside your views. As a real bonus, your code becomes re-usable because it doesn't depend on your complex setup.
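For example, each VirtualHost could load its own small settings module on top of a shared base (a sketch; the module and database names are illustrative):

    # settings_tenant1.py -- one settings module per instance
    from settings_base import *  # everything the instances share

    SITE_ID = 1
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'tenant1_db',
            'USER': 'tenant1',
            'PASSWORD': 'change-me',
            'HOST': 'localhost',
        }
    }

Adding an instance then means adding one settings file and one VirtualHost, with no code changes.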
Usually you can do instances of your app with the sites framework.
I have five nodes behind a load balancer and I'm trying to determine the optimal configuration for a Django based site.
Each node has access to Postgres, mod_wsgi, Apache, Lighttpd, memcached, pgpool2 (for database replication) and GlusterFS (for media file replication), and is running Ubuntu 8.04 LTS.
So far, the setup is four nodes running Apache/Lighttpd/memcached/pgpool2, all reading/writing to one master node that is running the "master" PostgreSQL. Each of the four web nodes also runs Postgres for replication from the master via pgpool.
So, my question is: How would you configure this setup and/or what would you change so that there is no single point of failure, if possible?
This sounds like a good setup, although it's hard to know exactly what your setup looks like in terms of memory, etc., and what traffic you expect to handle.
You might want to consider using Django's multi-db support and having a read-only Postgres instance (use DB routing to direct reads for certain apps to the read-only instance). This can offer some quite nice speed improvements - at the moment you could have a potential bottleneck at the single Postgres master, depending on how heavy your database work is.
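A minimal sketch of such a router, assuming a replica is configured under a 'readonly' alias in DATABASES (the alias name is illustrative):

    # routers.py -- sketch: send reads to the replica, writes to the master
    # enable with DATABASE_ROUTERS = ['routers.ReadWriteRouter'] in settings.py
    # (the module path depends on your project layout)
    class ReadWriteRouter(object):
        def db_for_read(self, model, **hints):
            return 'readonly'  # the read-only replica

        def db_for_write(self, model, **hints):
            return 'default'   # the master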
As @ashwoods suggested, it might be worth looking into gunicorn and nginx. I guess at the moment you use Apache only to run mod_wsgi, and Lighttpd for the static files? With nginx, you can use it with a number of WSGI servers, and it's great for static files too.
The setup looks pretty good to me. I would consider using gunicorn/uWSGI + nginx. I would also benchmark using pgbouncer, although pgpool2 offers more out of the box.