Handling multiple users concurrently populating a PostgreSQL database - Django

I'm currently trying to build a web app that would allow many users to query an external API (I cannot retrieve all the data served by this API at regular intervals to populate my PostgreSQL database, for various reasons). I've read several things about ACID and MVCC, but I'm still not sure there won't be any problem if several users are populating/reading my PostgreSQL database at the very same time. So I'm asking for advice (I'm very new to this field)!
Let's say my users query the external API to retrieve articles. They make their search via a form; the back end receives it, queries the API, populates the database, then queries the database to return some data to the front end.
Would it be okay to simply create a single shared table to store the articles returned by the API when users query it?
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key mapping to a User model)?
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Thanks for your help!

Would it be okay to simply create a single shared table to store the articles returned by the API when users query it?
Yes. If the articles have unique keys (DOI?), you could use INSERT ... ON CONFLICT DO NOTHING to handle the (presumably very rare) case where an article is requested by two people nearly simultaneously.
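In Django terms, a minimal sketch of that pattern might look like this (assuming Django 2.2+; the app label, the Article model with a unique doi field, and the API field names are all assumptions):

# Hypothetical articles app; Article.doi is declared unique=True.
from myapp.models import Article

def store_api_results(api_results):
    articles = [
        Article(doi=item["doi"], title=item["title"])
        for item in api_results
    ]
    # ignore_conflicts=True makes Django emit
    # INSERT ... ON CONFLICT DO NOTHING on PostgreSQL, so two users
    # saving the same article nearly simultaneously cannot trigger
    # an IntegrityError on the unique doi constraint.
    Article.objects.bulk_create(articles, ignore_conflicts=True)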
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key mapping to a User model)?
Do you want to? Is there a reason to? Do you care who requested each article? It sounds like you anticipate storing only the first person to request each article, not every request.
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Right, you would be hitting the API a lot more often (assuming some large fraction of articles are requested more than once) and storing a lot of duplicates. It might not even solve the problem, if one person hits "submit" twice in a row, or has multiple tabs open, or writes a bot to hit your service in parallel.

Related

Django: change the default queryset based on the requesting user

I have a system with multiple organizations logging in and interacting with us and our partners. I have a table that keeps track of what users have access to what organizations. I would like for customers to only see their own records.
I am doing this inside the views and whatnot. However, I find myself often having to code around this: it means I can't use some of the generic views as easily, and forms are a pain because when a field is pulled in as a dropdown option, it shows all the records. In reality, I never want to receive all the records back. I would much rather the query check the access table and always return only what a user has access to.
I have seen some mentions of using a middleware change, but I would really like to keep this within the manager and queryset. It seems like that is what they are there for. However, I can't seem to find a way to reference request.user without passing it in (this causes other changes and messes with all my forms...).
Is there a way to do this within the manager and queryset?
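Not directly - a manager has no access to the request, so the user has to be passed in somewhere. A common compromise is a QuerySet method that takes the user explicitly, so the access-table filtering at least lives in one place. A rough sketch, where the Record model, the Access model, and the relation names are all hypothetical:

from django.db import models

class RecordQuerySet(models.QuerySet):
    def for_user(self, user):
        # DB-side filter through the access table: only records whose
        # organization the user has access to are returned.
        # (Assumes a hypothetical Access model with foreign keys to
        # both User and Organization.)
        return self.filter(organization__access__user=user)

class Record(models.Model):
    organization = models.ForeignKey("Organization", on_delete=models.CASCADE)
    objects = RecordQuerySet.as_manager()

# In a view or form: Record.objects.for_user(request.user)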

Real Time Google Analytics API - Identify user session

I'm retrieving event data using the Real Time Google Analytics API, so as to trigger responses each time conditions are met while the user navigates.
This is my current query against the Google Analytics Real Time API (it works perfectly!):
return service.data().realtime().get(
    ids='ga:' + profile_id,
    metrics='rt:totalEvents',
    dimensions='rt:eventAction,rt:eventLabel,rt:eventCategory',
    max_results='25').execute()
I'd like to show results grouped by each particular session or user, so as to trigger a message to that particular user if some conditions are met.
Is that possible? And if so, how do I apply this criterion to the query?
"Trigger a message to a particular user" would imply that you either have personally identifiable data stored in GA, which would violate Googles TOS, or that you map an anonymous ID (clientid or UserID or similar) to a key stored in an external database (which might be legally murky, depending on your legislation). Since I don't want to throw away the answer I have written before reading your question to the end :-) I am going to assume the latter.
So, is that possible? No, not really. By default GA exposes neither an identifier for the user (client ID or user ID) nor for the session (a session identifier is present only in the BigQuery export schema).
The realtime API has a very limited set of dimensions (mostly, I think, because data aggregation does not happen in realtime), so you can't even use custom dimensions. Your only chance would be to overwrite one of the standard fields, e.g. campaign information.
Of course this destroys the original data in the field, so you should use an extra view for the API query, send a custom dimension with the user identifier along, and then use an advanced filter to copy the custom dimension value to a standard field (while your original data stays safe in your other data views). This is a bit hackish, though.
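For instance, the hit that populates that custom dimension could be sent server-side via the (Universal Analytics) Measurement Protocol - a hypothetical sketch, with placeholder property ID, client ID, and dimension index:

import requests

# Send an event carrying a mapped, non-personal user key in custom
# dimension 1 (cd1); an advanced filter in a dedicated view can then
# copy cd1 into a standard field such as campaign.
requests.post(
    "https://www.google-analytics.com/collect",
    data={
        "v": "1",              # Measurement Protocol version
        "tid": "UA-XXXXX-Y",   # placeholder property ID
        "cid": "35009a79-1a05-49d7-b876-2b884d0f825b",  # anonymous client ID
        "t": "event",
        "ec": "video",         # event category
        "ea": "play",          # event action
        "cd1": "user-key-42",  # custom dimension 1: external user key
    },
)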
Also, the realtime API only displays the current hit per user, so you cannot group by user in the query in any case - you'd need to download the data, store it in an external database, and do your aggregation there.

How can I approach data split across multiple databases?

I'm putting together a proposal for the development of a web application.
The app is to be launched in multiple countries, and some of the client's partners and (allegedly; I'm no lawyer) some of the countries involved have rules about where personal data can be stored. The upshot is that there is a hard requirement that particular data about certain countries' users is stored on servers in that country. (It sounds like they're OK with me caching data in any country, though -- so I intend to have a Redis in-memory store in the main data centre.) Some of the data (credit card details, for example) will additionally be encrypted, but this seems to make no difference to them in terms of where it can be stored.
With the current set of requirements, users from one country won't actually ever interact with users from another country, so one obvious option is to run different instances of the application in each country, entirely self-contained. This is simpler from an architectural point of view, but harder to manage, and would have overall higher server costs. It might get complicated if for example the client wants reports on all users across all countries, or eventually they want to merge the databases, and users' primary keys have to change. Not impossible, but it'd likely be a pain.
Probably better would be to have a central database with all information the client deems it acceptable to host in a single spot (North America somewhere), and then satellite databases in each country holding the information the client needs to be kept "at home".
So the main database would have the main users table, consisting of only a PK and a country code, and would have lots of other tables. Each local database would have a "user details" table, with a foreign key (to the main users table on the main database) and a bunch of other columns of personally identifiable information, as well as username, email address, password, etc.
The client may then push to have other data stored in the satellite locations, some of which may be one-to-many with a user or many-to-many with a user.
My questions:
How can this be handled with Django? Can it, or should I look at other frameworks?
Can the built-in User model be edited to look in all the satellite databases for the matching User model on login and, once logged in, retrieve the user data from those databases without too much trouble?
Are there any guidelines you can give me to make sure code stays simple and things stay efficient?
Will this be significantly easier if the satellite database only has one-to-one data with the main User table? I imagine that having one-to-many or many-to-many data in those satellite databases would be a major pain (or at least inefficient), or am I wrong?
To answer your questions accordingly:
This looks like something you could do in Django (I like Django, so I may not be the best person to give an opinion here) - maybe the following will convince you (or not).
A microservice approach? Multiple instances of the "user" microservice, each with its own database (I heard you about the costs, but maybe?).
You can do plenty with Django authentication backends (including writing your own) - there is a "remote user" auth backend you could use as an example; see the sketch after this list. Also read about stateless authentication (JWT).
Look at points 1 and 2.
Consider not using the built-in Django user model if it doesn't suit you.
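As a rough sketch of point 3 (assuming Django 3.0+ for BaseBackend): a custom backend could try each satellite database in turn. The database aliases and the UserDetails model are assumptions, not a definitive design:

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import BaseBackend

from myapp.models import UserDetails  # hypothetical satellite model

SATELLITE_ALIASES = ["satellite_us", "satellite_de"]  # assumption

class SatelliteBackend(BaseBackend):
    def authenticate(self, request, username=None, password=None):
        for alias in SATELLITE_ALIASES:
            details = (
                UserDetails.objects.using(alias)
                .filter(username=username)
                .first()
            )
            if details is not None and details.check_password(password):
                # Hand back the central User row the details point to.
                return get_user_model().objects.get(pk=details.user_id)
        return None

    def get_user(self, user_id):
        return get_user_model().objects.filter(pk=user_id).first()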

How to restrict certain rows in a Django model to a department?

This looks like it should be easy but I just can't find it.
I'm creating an application where I want to give admin site access to people from different departments. Those people will read and write the same tables, BUT they must only access rows belonging to their department! I.e. they must not see any records produced by the other departments and should be able to modify only the records from their own department. If they create a record, it should automatically "belong" to the department of the user which created it (they will create records only from the admin site).
I've found django-guardian, but it looks like an overkill - I don't really want to have arbitrary per-record permissions.
Also, the number of records will potentially be large, so any kind of front-end permission checking on a per-record basis is not suitable - it must be done by DB-side filtering. Other than that, I'm not really particular how it will be done. E.g. I'm perfectly fine with mapping departments to auth groups.
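One approach that keeps the filtering in the database is to override get_queryset() and save_model() on the ModelAdmin. A minimal sketch - the department foreign key on the record model and the user profile are assumptions:

from django.contrib import admin

class DepartmentScopedAdmin(admin.ModelAdmin):
    def get_queryset(self, request):
        qs = super().get_queryset(request)
        if request.user.is_superuser:
            return qs
        # DB-side filter: users only ever see their own department's rows.
        return qs.filter(department=request.user.profile.department)

    def save_model(self, request, obj, form, change):
        if not change:
            # New records automatically belong to the creator's department.
            obj.department = request.user.profile.department
        super().save_model(request, obj, form, change)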

Making sharding simple with Django

I have a Django project based on multiple PostgreSQL servers.
I want users to be sharded across those database servers using the same sharding logic used by Instagram:
User ID => logical shard ID => physical shard ID => database server => schema => user table
The logical shard ID is directly calculated from the user ID (13 bits embedded in the user id).
The mapping from logical to physical shard ID is hard coded (in some configuration file or static table).
The mapping from physical shard ID to database server is also hard coded. Instagram uses Pgbouncer at this point to retrieve a pooled database connection to the appropriate database server.
Each logical shard lives in its own PostgreSQL schema (for those not familiar with PostgreSQL, this is not a table schema, it's rather like a namespace, similar to MySQL 'databases'). The schema is simply named something like "shardNNNN", where NNNN is the logical shard ID.
Finally, the user table in the appropriate schema is queried.
How can this be achieved as simply as possible in Django?
Ideally, I would love to be able to write Django code such as:
Fetching an instance
# this gets the user object on the appropriate server, in the appropriate schema:
user = User.objects.get(pk = user_id)
Fetching related objects
# this gets the user's posted articles, located in the same logical shard:
articles = user.articles
Creating an instance
# this selects a random logical shard and creates the user there:
user = User.create(name = "Arthur", title = "King")
# or:
user = User(name = "Arthur", title = "King")
user.save()
Searching users by name
# fetches all relevant users (kings) from all relevant logical shards
# - either by querying *all* database servers (not good)
# - or by querying a "name_to_user" table then querying just the
# relevant database servers.
users = User.objects.filter(title = "King")
To make things even more complex, I use Streaming Replication to replicate every database server's data to multiple slave servers. The masters should be used for writes, and the slaves should be used for reads.
Django provides support for automatic database routing, which is probably sufficient for most of the above, but I'm stuck on User.objects.get(pk = user_id): the router does not have access to the query parameters, so it does not know what the user ID is; it only knows that the code is trying to read the User model.
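One workaround is to bypass the router for primary-key lookups and pick the connection alias explicitly with .using(), deriving the shard from the ID itself. A rough sketch - the bit layout follows the Instagram scheme described above, and the alias mapping is a stand-in for your configuration:

# In an Instagram-style ID, the 13 logical-shard bits sit just above
# the bottom 10 sequence bits, hence (id >> 10) & 0x1FFF.
LOGICAL_TO_PHYSICAL = {0: "shard_db_0", 1: "shard_db_1"}  # config stand-in

def shard_alias_for(user_id):
    logical_shard = (user_id >> 10) & 0x1FFF
    return LOGICAL_TO_PHYSICAL[logical_shard % len(LOGICAL_TO_PHYSICAL)]

# User is the sharded model from the question:
user = User.objects.using(shard_alias_for(user_id)).get(pk=user_id)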
I am well aware that sharding should probably be used only as a last resort optimization since it has limitations and really makes things quite complex. Most people don't need sharding: an optimized master/slave architecture can go a very long way. But let's assume I do need sharding.
In short: how can I shard data in Django, as simply as possible?
Thanks a lot for your kind help.
Note
There is an existing question which is quite similar, but IMHO it's too general and lacks precise examples. I wanted to narrow things down to a particular sharding technique I'm interested in (the Instagram way).
Mike Clarke recently gave a talk at PyPgDay on how Disqus shards their users with Django and PostgreSQL. He wrote up a blog post on how they do it.
Several strategies can be employed when sharding Postgres databases. At Disqus, we chose to shard based on table name. Whereas the original table name as generated by Django might be comments_post, our sharding tools rewrite the SQL to query a table comments_post_X, where X is the shard ID calculated from a consistent hashing scheme. All these tables live in a single schema, on a single database instance.
In addition, they released some code as part of a sample application demonstrating how they shard.
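The heart of that rewrite is just computing a deterministic suffix from the shard key. A toy illustration - zlib.crc32 stands in for whatever consistent hashing scheme Disqus actually uses, and the shard count is an assumption:

import zlib

NUM_SHARDS = 64  # assumption

def shard_table(base_table, shard_key):
    # A stable hash of the shard key picks the suffix, so the same
    # key always maps to the same physical table.
    shard_id = zlib.crc32(str(shard_key).encode("utf-8")) % NUM_SHARDS
    return "{}_{}".format(base_table, shard_id)

# shard_table("comments_post", post_id) returns a stable name
# like "comments_post_NN".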
You really don't want to be in the position of asking this question. If you are sharding by user ID, then you probably don't want to search by name.
If you are sharding your database then it's not going to be invisible to your application and will probably end up requiring schema alterations.
You might find SkyTools useful - read up on PL/Proxy. It's how Skype shards their databases.
It is better to use professional sharding middleware, for example Apache ShardingSphere.
The project provides two products: ShardingSphere-JDBC, a JDBC driver for Java, and ShardingSphere-Proxy, which works with any programming language. It can support Python and Django as well.