Django with Heroku Postgres database horizontal scaling

The problem background
I am working on a Django project that uses PostgreSQL and is hosted on Heroku (with Heroku Postgres). Over time the amount of data has grown very large, and that slows down the application.
I tried replication so that reads could be spread across multiple databases. That helped reduce queueing and connection time, but the big table is still the problem.
I can split the data by group of users; different groups do not need to interact with each other.
I have read about sharding, but since we use Heroku Postgres it is hard to customize the sharding setup.
So I have come up with the two ideas below.
1. The app cluster with multi-db (not allowed to embed an image yet)
Please see the design here.
We can use middleware and database routers to achieve this (a router sketch follows the two ideas below).
But I am not sure how friendly this is with Django.
2. The gateway app with multiple sub-apps (not allowed to embed an image yet)
Please see the design here.
This requires less effort than the previous design.
It would also make region-based sub-apps possible in the future.
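For idea 1, here is a minimal sketch of what a Django database router keyed on a user-group field could look like; the group attribute and the group_<name> database aliases are assumptions, not part of the original post.

# settings.py would define one DATABASES alias per user group (e.g. "group_a")
# alongside "default", and point DATABASE_ROUTERS at this class.

class GroupRouter:
    """Route reads and writes to the database assigned to a row's user group."""

    def _db_for(self, hints):
        obj = hints.get("instance")
        group = getattr(obj, "group", None)          # hypothetical field on the models
        return f"group_{group}" if group else None   # hypothetical alias naming scheme

    def db_for_read(self, model, **hints):
        return self._db_for(hints)

    def db_for_write(self, model, **hints):
        return self._db_for(hints)

    def allow_relation(self, obj1, obj2, **hints):
        # Only allow relations between rows that live in the same group database.
        return obj1._state.db == obj2._state.db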
My question is: which of the two is more Django-friendly and better for scalability in the long run?

Related

Use of redis cluster vs standalone redis

I have a question about when it makes sense to use a Redis cluster versus standalone Redis.
Suppose one has a real-time gaming application that allows multiple instances of the game and wishes to implement a real-time leaderboard for each instance (games are created by communities of users).
Suppose at any time we have, say, 100 simultaneous matches running.
Based on the use cases outlined here:
https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
https://redislabs.com/solutions/use-cases/leaderboards/
https://aws.amazon.com/blogs/database/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
We can implement each leaderboard using a Sorted Set kept in memory.
Now I would like to add some sort of persistence where the leaderboard state is saved at the end of each game as a snapshot, so that each of these independent Sorted Sets is saved as a snapshot file.
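For reference, each such leaderboard is only a couple of redis-py calls; the key naming below is illustrative rather than taken from the question:

import redis

r = redis.Redis()

def record_score(game_id, player, score):
    # ZADD keeps the sorted set ordered by score.
    r.zadd(f"leaderboard:{game_id}", {player: score})

def top_players(game_id, n=10):
    # Highest scores first, with their scores included.
    return r.zrevrange(f"leaderboard:{game_id}", 0, n - 1, withscores=True)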
I have a question about design choices:
Would a Redis cluster make sense for this scenario? Or would it make more sense to have standalone Redis instances and create a new database for each game?
As far as I know there is only a single database (database 0) for a Redis cluster (https://redis.io/topics/cluster-spec).
In that case, how would snapshotting the dataset for each leaderboard at different times work?
From what I can see, using a Redis cluster only makes sense for large-scale monolithic applications and may not be the best approach for the scenario described above. Is that the case?
Or, if one goes with AWS ElastiCache for Redis in cluster mode, can I configure snapshotting for individual datasets?
You are correct: clustering is a way of scaling out to handle really high request loads and to store tons of data.
It really doesn't sound like you need to bother with a cluster.
I'd be very surprised if a standalone Redis setup were your bottleneck before you have several tens of thousands of simultaneous players.
If you are unsure, you can probably mock up some simulated load and see what it can handle. My guess is that you are better off focusing on other complexities of your game until you start reaching quite serious usage, which is a good problem to have. :)
You might however want to consider having one or two replica instances, which is a different thing.
Secondly, regardless of whether you cluster or not, why do you want to use snapshots (SAVE or BGSAVE) to persist your scoreboards?
If you want individual snapshots per game, and it's only a few keys per game, why not just have your application read and persist those keys to a traditional DB when needed? You can, for example, use MULTI, DUMP and RESTORE to achieve something very similar to snapshotting, but limited to the specific keys you want.
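A rough sketch of that per-key approach with redis-py; the key names are illustrative, and where you store the serialized payload is up to you:

import redis

r = redis.Redis()

def snapshot_leaderboard(game_id):
    # DUMP returns Redis's own serialized representation of the key's value,
    # which you can then write to your traditional DB or to disk.
    return r.dump(f"leaderboard:{game_id}")

def restore_leaderboard(game_id, payload):
    # RESTORE recreates the key from that payload; ttl=0 means no expiry,
    # replace=True overwrites any existing key.
    r.restore(f"leaderboard:{game_id}", 0, payload, replace=True)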
It doesn't sound like multiple databases are warranted for this.
Multiple databases on clustered Redis are only supported in the Enterprise version, so not on ElastiCache, but the above-mentioned approach should work just fine.

Is there a way to compute this amount of data and still serve a responsive website?

Currently I am developing a Django + React website that will (I hope) serve a decent number of users. The project demo is mostly complete, and I am starting to think about the scale required to put this thing into production.
The website essentially does three things:
Grab data from external APIs (e.g. Twitter) for 50,000 unique keywords (the keywords don't change). This process happens every 30 minutes.
Run computation on all of the data, and save that computation to the database. Assume that the algorithm is as optimized as possible
When a user visits the website it should serve a pretty graph/chart of all of the computational data per keyword
The issue is that this is far too intense a task to be done by the same application that serves the website; users would be waiting decades to see their data. My current plan is to have a separate API that serves the website with the data, which the website can then store in its database. This separate API would process the data without fear of affecting users, and it should be able to finish its current computation in under 30 minutes, in time for the next round of data.
Can anyone help me understand how I can better equip my project to handle the scale? I'd love some ideas.
As a 4th-year CS student I figured it's time to put a real project out into the world, and I am very excited about it and the progress I've made so far. My main worry is that the end users will be negatively affected if I don't figure out some kind of pipeline to make this process happen.
To reiterate my idea:
Django + React - This is the forward facing website
External API - Grabs the data off the internet and processes it, and waits for a GET request from the website
Is there a better way to do this? Or on the other hand am I severely overestimating how computationally heavy this is.
Edit: Including current research
Handling computationally intensive tasks in a Django webapp
Separation of business logic and data access in django
What you want is to have the computation task executed by a different process in the background.
The most straightforward and popular solution is to use Celery (see here).
The Celery worker(s) - which performs the background task - can either run on the same machine as the web-application or (when scale becomes an issue), you can change the configuration so that it will run on an entirely different machine.
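A minimal sketch of that setup, assuming Celery with a Redis broker; the module name, helper functions and beat schedule below are illustrative placeholders, not part of the answer:

from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

def fetch_external_data(keyword):
    # Placeholder for the external-API call (e.g. Twitter).
    return {}

def compute_and_save(keyword, data):
    # Placeholder for the heavy computation and the database write.
    pass

@app.task
def recompute_all_keywords(keywords):
    # Runs in the worker process, outside the web request/response cycle,
    # so users browsing the site are never blocked by it.
    for keyword in keywords:
        compute_and_save(keyword, fetch_external_data(keyword))

# Kick off the batch every 30 minutes with Celery beat.
app.conf.beat_schedule = {
    "recompute-every-30-minutes": {
        "task": "tasks.recompute_all_keywords",
        "schedule": 30 * 60,
        "args": (["example-keyword"],),
    },
}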

Django project-apps: What's your approach about implementing a real database scheme?

I've read articles and posts about what a project and an app are in Django, and they basically end up using the typical example of Polls and Users. However, a real program generally uses a complex relational database, so its design gravitates around that RDB, and the eternal conflict arises once again: which parts should be considered applications and which should be components of an application?
Let's take as an example this RDB (courtesy of Visual Paradigm):
I could consider the whole set as one application, or consider every entity as its own application; the outlook looks gray. The only thing I'm sure about is this:
$ django-admin startproject movie_rental
So I wish to learn from the expertise of all of you: What approach (not necessarily those mentioned before) would you use to create applications based on this RDB for a Django project?
Thanks in advance.
PS1: MORE DETAILS ABOUT MY REQUEST
When programming something I follow these steps:
Understand the context of what you are going to program,
Identify the main actors and objects in this context,
If needed, make a UML diagram,
Design a solid-relational-database diagram, (solid=constraints, triggers, procedures, etc.)
Create the relational database,
Start coding... suffer and enjoy
When I learn something new, I hope the authors follow these same steps so I can understand where they want to go with their actions.
When reading articles and posts (and watching videos), almost all of them omit steps 1 to 5 (because they choose simple demo apps), and when programming they take the easy route and don't show other situations or the many touted features Django offers (reusability, pluggability, etc.).
With this request, I wish to know what criteria experienced Django programmers use to determine which applications to create based on this sample RDB diagram.
With the (2) answers obtained so far, "application" for...
brandonris1 is about features/services
Jeff Hui is about implementing entities of a DB
James Bennett is about every action on a object, he likes doing a lot of apps
Conclusion so far: Django application is a personal creed.
My initial request was about creating applications, but since models have been mentioned, I have another question: with a legacy relational database (as shown in the picture), is it possible to create a Django project with multiple apps? I ask because in every Django demo project I've seen, each app has models with their own tables, giving the impression that tables do not interact with those of other applications.
I hope my request is more clear. Thanks again for your help.
It seems you are trying to decide between building a single monolithic application vs microservices. Both approaches have their pros and cons.
For example, a single monolithic application is a good solution if you have a small amount of support resources and do not need to develop new features in fast sprints across the different areas of the application (e.g. film management features vs. staff management features).
One major downside to large monolithic applications is that eventually their feature sets grow too large and with each new feature, you have a significant amount of regression testing which will need to be done to ensure there aren't any negative repercussions in other areas of the application.
Your other option is to go with a microservice strategy. In this case, you would divide these entities amongst a series of smaller services and provide them each methods to integrate/communicate with each other (APIs).
Example:
- Film Service
- Customer Service
- Staff Service
The benefit of this approach is that it allows you to separate capabilities and features by specific service area, thus reducing risk and regression testing across the application when new features are deployed or there is a catastrophic issue (e.g. the DB goes down).
The downside to this approach is that under a true microservice architecture all resources are separated, so you need unique resources (i.e. databases, servers) for each service, thus increasing your operating cost.
Either of these options is a good option but is totally dependent on your support model and expected volumes. Hope this helps.
ADDITIONAL DETAIL:
After reading through your additional details: since this DB already exists and my assumption is that you cannot migrate it, you still have the same choice as to whether you follow a monolithic application or a microservices architecture.
For both approaches, you would need to connect your Django web app to the specific DB you are already using. I can't speak for every connector out there, but I know the MySQL connector allows Django to read the pre-existing DB and systematically generate the models.py file for the application. As part of that, there is a model variable which lets you define whether or not Django is responsible for actually managing the DB tables themselves.
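This appears to describe Django's inspectdb command together with the managed model option; a minimal sketch, where the rentals app name and the film table are assumptions based on the movie_rental example:

# Generate model stubs from the existing database, then leave them unmanaged
# so Django never tries to create or migrate the legacy tables:
#
#   $ python manage.py inspectdb > rentals/models.py
#
# A trimmed example of what a generated model might look like:
from django.db import models

class Film(models.Model):
    title = models.CharField(max_length=255)
    release_year = models.IntegerField(blank=True, null=True)

    class Meta:
        managed = False       # Django reads and writes, but does not manage the table
        db_table = "film"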
The only thing this changes from an architecture perspective is how many times you want to code that connection.
If you only want to do it once and comply fully with the DRY principle, you can build a monolithic application, knowing that as new features become required, application-wide regression testing will be an absolute requirement.
If you want ultimate flexibility for future changes to this collection of features, and don't mind recoding that connection across multiple apps in exchange for less application-wide regression testing as new features become required, a microservice architecture is more appropriate.

REST API or "direct" database access for remote Celery/Django workers?

I'm working on a project that will have multiple celery workers on machines in different locations in the US that will communicate over the internet.
Am I better off distributing my Django project to each machine and configuring them with the database credentials to my database host, or should I have a "main" Django/database host that presents a REST API for remote celery tasks and workers to hit for the database access?
Mostly looking for pros/cons and any factors I haven't thought of.
I can provide single simple API endpoints that provide all the data my tasks need to query and simple POST API endpoints that can create all the database entries my tasks need to create.
I'll have no more than, say, 10 remote workers, which altogether would maybe be doing 1 request per minute.
I'm thinking this probably means my concerns are not so much about the request/response overhead but more about maintainability, architecture, security...
The answer depends on too many variables, concerns, forces and whatnots to be anything other than "it depends...".
I assume you already thought of the following pros and cons, but anyway:
Using an API will make for longer request/response cycles (obviously) and put quite some load on the Django project (front server, app server, etc.). It also means that your tasks won't be able to use all of the database features (complex queries, aggregations, whatever).
OTOH, adding an API layer will isolate the workers from the inner DB schema, which can make migrations (on the Django side) and deployment easier, as you won't have to stop all the workers, deploy to everyone and restart the workers. It even makes it possible to change the technology on the API side without impacting the workers (not that I see much reason to do so, but anyway...). But it also means you have a whole API to maintain, and chances are that model changes - or at least part of them - will impact your API and/or your task code anyway (if the changes are about adding features that the workers should use, etc.).
IOW, it really depends (yeah, I already said so, didn't I?) on your project's needs and constraints, and only you/your team know which solution will best match your project.
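For concreteness, a rough sketch of what the two options can look like inside a worker task; the endpoint path, model and field names are made up rather than taken from the question:

import requests
from celery import shared_task

@shared_task
def total_via_api(order_id):
    # Option 1: go through the "main" host's REST API; the worker only needs
    # an HTTP client and credentials, not the Django project or the DB schema.
    resp = requests.get(f"https://main.example.com/api/orders/{order_id}/", timeout=10)
    resp.raise_for_status()
    return resp.json()["total"]

@shared_task
def total_via_orm(order_id):
    # Option 2: the worker ships with the Django project and DB credentials,
    # so it can use the full ORM (joins, aggregations, select_related, ...).
    from myapp.models import Order  # hypothetical model in the shared project
    return Order.objects.get(pk=order_id).total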

Sitecore performance enhancements

We need our Sitecore web application to process 60-80 web requests per second. We are using Sitecore 7.0. We have tried a 1 web server + 1 database server deployment, but it only processes 20-25 requests per second; the web server queues up all the other requests in memory, and as we increase the load, memory fills up. (We applied all of the recommended Sitecore performance enhancements.) We need 4x performance to reach the goal :).
Will it be possible to achieve this goal by upgrading the existing server, or do we have to add more web servers to the production environment?
Note: We are using Lucene indexing as well.
Here are some things you can consider without changing the overall architecture of your deployment:
CDN to offload media and static asset requests
This leaves your content delivery server available to handle important content queries and display logic.
Example www.cloudflare.com
Configure and use Sitecore's built-in caching
This is from the guide:
Investigation and configuration of the Sitecore caches is broken down into multiple tasks. This way each task is more focused and simplified. The focus is on configuration and tuning of the Sitecore database caches (prefetch, data, and item caches). For configuration of the output rendering caching properties, the customer should be made aware of both the Sitecore Cache Configuration Reference and the Sitecore Presentation Component Reference as to how to properly enable these caches and the properties to expire them.
Check out the Sitecore Tuning Guide
Find Slow Queries or Controls
It sounds like your application follows Sitecore best practices, but I leave this note in for anyone that might find this answer. Use Sitecore's built-in Debug mode to identify the slowest running controls and sublayouts. Additionally, if you have Analytics set up there is a "Slow Pages" report that might give you some information on where your application is slowing down.
Those things being said, if you're prepared to provision additional servers and set up a load-balanced environment then read on.
Separate Content Delivery and Content Management
To me the first logical step before load-balancing content delivery servers is to separate the content management from the equation. This is pretty easy and the Scaling Guide walks you through getting the HistoryEngine set up to keep those Lucene indexes up to date.
Set up Load Balancer with 2 or more Content Delivery servers
Once you've done the first step this can be as easy as cloning your content delivery server and adding it to your load balancer "pool". There are a couple of things to consider here like: Does your web application allow users to log in? So you'll need to worry about sticky sessions or machine keys. Does your web application use file media instead of blob media? I haven't had to deal with this, but I understand that's another consideration.
Scale your SQL solution
I've seen applications with up to four load balanced content delivery servers and the SQL Server did not have a problem - I think this will be unique to each case depending on a lot of factors: horsepower and tuning of SQL Server, content model of your application, complexity of your queries, caching configuration on content delivery servers, etc. Again, the Scaling Guide covers SQL Mirroring and Failover, so that is going to be your first stop on getting that going.
Finally, I would say contact Sitecore. These guys have probably seen more of what's gone right and what's gone wrong with installations and could get you on the right path. Good luck!
This answer written from a Sitecore developer perspective:
Bottom line: You need to figure out exactly where your performance bottleneck is. That is going to take some digging, but will be very worthwhile. You should definitely be able to serve 60-80 requests/s without any trouble... but of course that makes a lot of assumptions about the nature of your site and the requests.
For my site, I found Sitecore's caching implementation to be sub-par... I created some very simple and aggressive application-specific caches in my app and this made all the difference in the world. For instance, we have 900+ "Partner" items where our sites' advertisements live... and simply putting all these objects in an array in the Application object sped up page requests significantly. Finding an object in a Hashtable indexed by its Item.Name or ID is going to be a lot faster than Sitecore.Context.Database.GetItem("/itempath") or a SelectItems() call (at least, that's my experience). If your architecture and data set will allow this strategy, we've had good experience with it.
Another thing to watch out for is XSLT renderings. Personally, I avoid them completely in favor of ASP.NET UserControls. The XSLT rendering is just slow. As much as 10x slower than a native UserControl rendering the same HTML. So if you have a few of these... replace with some custom code and you'll see a world of difference.