Django MongoDB automatic failover from Primary (master) to Secondary (slave)

I've developed a web app in Django and am using MongoDB for the backend.
I'm not sure how to set up automatic failover for the database.
My requirement is that when the Primary node of MongoDB is down, Django should automatically connect to a Secondary node.
How can this be achieved?
I found this library, https://github.com/brianjaystanley/django-failover
which is for Django 1.3, but I want it for Django 1.5.
What settings do I need to change, or is there any library available to the rescue? Any solutions on the floor?
Thanks

You should not need to set up anything in your application to handle this, and the library you linked is not appropriate for use with MongoDB, as it is a relational-backend solution.
The first question here is: do you actually have a Replica Set configuration for MongoDB? I can only answer presuming that you do, but the link is worthwhile reading, as from your question you probably do not have a core understanding of MongoDB replication concepts.
What is explained there is that there is no fixed Secondary for your application to fail over to; what actually happens is that the Replica Set itself elects amongst its members which node will become the Primary.
Going on with the answer, you configure your application to handle the failover by setting up your Connection String for the driver. Read through that documentation and you will find that, among other useful things, you are basically providing a list of hostnames which are members of the Replica Set. You don't need all the members, just enough to act as a seed list so that the other nodes can be discovered. That would happen anyway with the correct options, but it is good practice to have more than one host to contact even to get that information. Here's a sample:
mongodb://<Primary>,<Secondary>/<database>
You may also want to take a look at MongoEngine, considering you probably have experience with Django; it uses modelling concepts you will be familiar with, whilst still allowing access to MongoDB features. There is documentation there on setting up Replica Set connections, from memory.
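For illustration, a MongoEngine connection against a replica set might look roughly like the sketch below; the host names, database name and replica set name are placeholders rather than anything from your setup:

# A minimal sketch, assuming a replica set named "rs0" and two seed hosts;
# substitute your own hosts, database and replica set name.
import mongoengine

mongoengine.connect(
    host='mongodb://db1.example.com:27017,db2.example.com:27017/mydatabase'
         '?replicaSet=rs0'
)

With a seed list like this the driver discovers the rest of the set and reconnects to whichever member is elected Primary after a failover.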

Related

Django + PostgreSQL with bi-directional replication

Firstly, let me introduce my use case: I am working on a Django application (a GraphQL API using Graphene) which runs in the cloud but also has local instances in customers' local networks.
For example, one application in the cloud and 3 instances (a local Django app instance with a PostgreSQL server with BDR enabled) on local networks. If there is a network connection we use bi-directional replication to have fresh data, because if there is no connectivity we use the local instances. Here is the simplified infrastructure diagram for illustration.
So, if I want to use BDR I can't do DELETE and UPDATE operations in the ORM. I have to generate UUIDs for my entities, and every change is just a new record with updated data for the same UUID. The latest record for a given UUID is my valid record; removal is just another flag. Until now everything seems to be fine; the problem starts when I want to use, for example, a many-to-many relationship. The relationship relies on the database primary keys, and I have to handle removal somehow. Can you please help me find the best way to solve this issue? I have a few ideas, but I do not want to make a bad decision:
I can try to override ManyToManyField to work with my UUIDs and the special removal flag. It looks like a nice idea because everything should work as before (Graphene will find the relations etc.), but I am afraid of "invisible" consequences.
Create my own models to simulate the ManyToMany relationship (roughly as in the sketch below). It's much more work, but it should work just fine.
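For what it's worth, that second option could look something like this explicit link model; this is only a sketch under the assumptions above (UUID identifiers plus an append-only removal flag), and the model and field names are invented for illustration:

import uuid
from django.db import models

class Article(models.Model):
    # Stable business identifier; rows are append-only, never updated in place.
    uid = models.UUIDField(default=uuid.uuid4, editable=False, db_index=True)

class Tag(models.Model):
    uid = models.UUIDField(default=uuid.uuid4, editable=False, db_index=True)

class ArticleTag(models.Model):
    # Explicit "many-to-many" link keyed on the UUIDs, not on database PKs.
    article_uid = models.UUIDField(db_index=True)
    tag_uid = models.UUIDField(db_index=True)
    removed = models.BooleanField(default=False)  # tombstone instead of DELETE
    created_at = models.DateTimeField(auto_now_add=True)

Resolving the current state of a relation then means taking the newest ArticleTag row per (article_uid, tag_uid) pair and checking its removed flag.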
Have you had to solve a similar issue before? Is there some kind of good practice, or is this just building a highway to hell (AC/DC is pretty cool)?
Or, if you think there is a better way to build the service architecture, I would love to hear your ideas.
Thanks in advance.

Django Moving lookup table to Redis

I have a Django app with Redis, which is currently used as the broker for Celery and nothing beyond that.
I would like to utilize it further for lookup caching.
Let's say I have a widely used table in my database that I keep hitting for lookups. For the sake of example, let's say it's a mapping of U.S. zip codes to city/state names, or any lookup that may actually change over time and is important to my application.
My questions are:
Once the server starts (in my case, Gunicorn), how do I load the data from the database table into Redis one time? I mean, where and how do I make this one-time call? Is there a place in the Django framework for such "onload" calls? Or do I simply trigger it lazily, upon the first request, which will be served from the database but trigger a Redis load of the entire table?
What about updates? If the database table is updated somehow (e.g. a row deleted, updated or added), how do I catch that in order to update the Redis representation of it?
Is there a best-practice or library already geared toward exactly that?
how do I one-time load
For the one-time load you can find an answer here (from those answers only the urls.py approach worked for me). But I prefer another scenario: I would create a management command and add that command to the script you use to start Gunicorn. For example, if you're using systemd you could add it to the service config. You can also combine those, e.g. create the command and call it from urls.py.
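A management command for the one-time load might look roughly like this; the app name, ZipCode model, field names and Redis key layout are assumptions for illustration:

# myapp/management/commands/load_zipcodes.py
import json
import redis
from django.core.management.base import BaseCommand
from myapp.models import ZipCode

class Command(BaseCommand):
    help = "Load the zip code lookup table into Redis"

    def handle(self, *args, **options):
        r = redis.Redis(host="localhost", port=6379, db=0)
        for row in ZipCode.objects.all():
            # One key per zip code, storing the city/state as JSON.
            r.set(f"zip:{row.code}", json.dumps({"city": row.city, "state": row.state}))
        self.stdout.write(self.style.SUCCESS("Lookup table loaded into Redis"))

You would then run python manage.py load_zipcodes from whatever script or systemd unit starts Gunicorn.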
What about updates?
It really depends on your database. For example, if you use PostgreSQL you can create triggers for update/insert/delete and expose Redis as an external (foreign) table. Django also has a signal mechanism, so you can implement this in Django as well. You can also write your own custom wrapper that implements your operations plus syncing with Redis, and call the wrapper instead of the ORM directly. But I prefer the first scenario.
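The signal approach could look something like this sketch; again the model, field and key names are assumptions, and you would connect the receivers from your AppConfig.ready():

# myapp/signals.py -- keep Redis in sync on every save/delete.
import json
import redis
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from myapp.models import ZipCode

r = redis.Redis(host="localhost", port=6379, db=0)

@receiver(post_save, sender=ZipCode)
def update_redis_entry(sender, instance, **kwargs):
    r.set(f"zip:{instance.code}", json.dumps({"city": instance.city, "state": instance.state}))

@receiver(post_delete, sender=ZipCode)
def delete_redis_entry(sender, instance, **kwargs):
    r.delete(f"zip:{instance.code}")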
Is there a best-practice or library already geared toward exactly that?
Sorry I can't help you with this one.

Can I use BigchainDB server with Django instead of using SQLite?

I am creating a degree verification process using a blockchain approach which contains six main entities. By entities I mean that the consensus mechanism will evolve around these six entities, so for this I need to build a distributed database. Two approaches came to my mind.
One approach is to build everything from scratch: a separate SQLite database for each node, and then connect the nodes with some kind of queries.
Another approach is to use BigchainDB server, which is a distributed database server based on blockchain.
Now my question is: which approach is feasible? I don't know whether BigchainDB server is compatible with Django or not, since they haven't mentioned anything about it in their docs.
If anyone has used BigchainDB, please help me out. I am really confused as to which approach I should follow.

Is it possible to use Django and Node.js?

I have a Django backend set up for user logins and user management, along with my entire set of templates, which are used to render HTML pages for visitors to the site. However, I am trying to add real-time functionality to my site, and I found a perfect library in Node.js that allows two users to type in a text box and have the text appear on both their screens. Is it possible to merge the two backends?
It's absolutely possible (and sometimes extremely useful) to run multiple back-ends for different purposes. However it opens up a few cans of worms, depending on what kind of rigour your system is expected to have, who's in your team, etc:
State. You'll want session state to be shared between the different app servers. The easiest way to do this is to store session state externally in a framework-agnostic way. I'd suggest JSON objects in a key/value store, and you'll probably benefit from JSON Schema (see the sketch after this list).
Domains/routing. You'll need your login cookie to be available to both app servers, which means either a single domain routed by Apache/Nginx or separate subdomains routed via DNS. I'd suggest separate subdomains, for the following reason.
Websockets. I may be out of date, but to my knowledge neither Apache nor Nginx supports proxying of WebSockets, which means that if you want to use them you'll sacrifice the flexibility of using an HTTP server as an app proxy and instead expose Node directly via a subdomain.
Non-specified requirements. Things like monitoring, logging, error notification, build systems, testing, continuous integration/deployment, documentation, etc. all need to be extended to support a new type of component.
Skills. You'll have to pay in time or money for the skill sets required to manage a more complex application architecture.
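As an illustration of the shared-state point above, the Django side could write session data as JSON to Redis under a key derived from the session cookie, so the Node.js process can read the same record; the key layout and fields below are assumptions, not anyone's established API:

# A sketch of framework-agnostic shared session state in Redis.
# Django writes it; the Node.js process reads the same key using the
# session cookie value. Key format and fields are illustrative only.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=1)

def save_shared_session(session_key, user_id, username):
    payload = {"user_id": user_id, "username": username}
    # Expire alongside the Django session (assume two weeks here).
    r.setex(f"shared-session:{session_key}", 14 * 24 * 3600, json.dumps(payload))

def load_shared_session(session_key):
    raw = r.get(f"shared-session:{session_key}")
    return json.loads(raw) if raw else None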
So, my advice would be to think very carefully about whether you need this. There can be a lot of time and thought involved.
Update: There are actually companies springing around who specialise in adding real-time to existing sites. I'm not going to name any names, but if you look for 'real-time' on the add-on marketplace for hosting platforms (e.g. Heroku) then you'll find them.
Update 2: Nginx now has support for Websockets
You can't merge them directly. You can send messages from Django to Node.js through a queue system like Redis.
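On the Django side, publishing to a Redis channel that the Node.js process subscribes to might look like this sketch; the channel name and message fields are made up for illustration:

# Django side: publish an event to a Redis channel that Node.js listens on.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def notify_node(room, user, text):
    r.publish("chat-events", json.dumps({"room": room, "user": user, "text": text}))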
If you really want to use two backends, you could use a database that is supported by both backends, though I would not recommend it.

Best approach(es) or technolog(y/ies) for this specific problem?

I have a web-based interface for handling invoices, customer records and other transaction records, which currently interacts with a database of all the aforementioned stored on the same machine. As you can imagine, this is quite a simple set-up consisting of a web app (PHP) and a database (MySQL). However, the ideal scenario is to keep the records on the machine they are currently on (easy) and move the web app to another server within the same network (again, easy), but in addition provide facilities on a public-facing website for managing accounts by customers and so forth. The problem is this: the public-facing web server is located in a completely separate location, as it is a dedicated server provided by a well-known ISP.
What would be the best way to make the records accessible from this other server whilst ensuring that all communications are secure? Speed is not a huge factor, although any outages on either side should be handled gracefully. Initially my thoughts went towards web services (XML-RPC/SOAP/Hessian), but these options seem to present difficulties (security being the main one, over-complexity as well).
The web app must remain PHP-based. The public-facing site is likely to be PHP-based as well, although Python (likely using Django) is another option. The introduction of any other technologies (Java etc.) is not a problem, although it is preferred that they be Linux-friendly (so .NET would not be the best fit here).
Apologies if this question is somewhat verbose and vague; I am testing the water somewhat with regard to this kind of problem. Any advice or suggestions gratefully received.
I've done something similar. You can expose a web service to the internet that does the database access, but requests to the service must be authenticated against a strong hashed and salted password (which is secured on the ISP's server in the DMZ).
Either that, or some sort of public/private key encryption scheme.
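If the public-facing side does end up being Python/Django, one simple way to do that is to sign each request to the internal service with a shared secret (an HMAC); this is only a sketch, and the endpoint URL and parameter names are invented:

# A sketch of request signing with a shared secret (HMAC).
import hashlib
import hmac
import time
import requests

SHARED_SECRET = b"keep-this-out-of-source-control"

def fetch_customer(customer_id):
    timestamp = str(int(time.time()))
    message = f"{customer_id}:{timestamp}".encode()
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    # The internal service recomputes the HMAC and rejects stale timestamps.
    return requests.get(
        "https://internal.example.com/api/customer",
        params={"id": customer_id, "ts": timestamp, "sig": signature},
        timeout=10,
    )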
OK, this might seem a bit silly, but what if you just used MySQL replication?
Instead of using all sorts of fancy web services, just have a master SQL server on one machine, then have it replicate to another server that holds the slave SQL server as well as the web app.