The following image shows a rough draft of my proposed database structure that I will develop for Django. Briefly, I have a list of ocean buoys, each of which has child tables of forecast conditions and observed conditions. I'd like users to be able to log their surf sessions (surfLogs table), entering their location, the time of the session, and their own rating.
I'd like the program to then look in the buoyConditions table for the buoy nearest the user's logged location and time, and append the relevant conditions to the surfLogs table. This will allow the user to keep track of which conditions work best for them (and eventually it could drive automatic notifications for the user).
I don't know what the name for this process of joining the tables is, so I'm having some trouble finding documentation on it. I think in SQL it's termed a join or update. How is this accomplished with Django?
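(For anyone searching: in Django this would usually be modeled as a ForeignKey from the surf log to a conditions row, filled in by a lookup, rather than a hand-written SQL JOIN/UPDATE. A minimal sketch of that lookup, with every model and field name hypothetical:)

```python
from myapp.models import Buoy, BuoyCondition  # hypothetical models

def attach_conditions(surf_log):
    """Attach the nearest buoy's conditions closest to the session time."""
    # Naive nearest-buoy search done in Python; fine for a small buoy list
    # (GeoDjango/PostGIS could push this into the database instead).
    nearest = min(
        Buoy.objects.all(),
        key=lambda b: (b.lat - surf_log.lat) ** 2 + (b.lon - surf_log.lon) ** 2,
    )
    # Latest conditions recorded at or before the session started.
    conditions = (
        BuoyCondition.objects
        .filter(buoy=nearest, timestamp__lte=surf_log.session_time)
        .order_by("-timestamp")
        .first()
    )
    surf_log.conditions = conditions  # hypothetical ForeignKey on surfLogs
    surf_log.save(update_fields=["conditions"])
```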
Thanks!
I have a table linked to and getting data from Zapier. The records come in at 200 rows per minute. Logged-in users are supposed to be picking up these records and working on them. The problem is that there has been some confusion as to which records have been worked on and which have not.
I have built a table to help show which records have been worked on by ticking a Treated flag.
I hope I can divide the records equally among the logged-in users so as not to underwhelm or overwhelm any of them.
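(A minimal sketch of one common way to do this, assuming a hypothetical Record model with a treated flag and an assigned_to foreign key: Django's select_for_update(skip_locked=True), which maps to SELECT ... FOR UPDATE SKIP LOCKED on PostgreSQL, lets each logged-in user atomically claim the next unclaimed record, so the work spreads across users without two of them grabbing the same row:)

```python
from django.db import transaction
from myapp.models import Record  # hypothetical: treated flag, assigned_to FK

def claim_next_record(user):
    """Atomically hand the next untreated, unassigned record to `user`."""
    with transaction.atomic():
        record = (
            Record.objects
            .select_for_update(skip_locked=True)  # concurrent claimers skip locked rows
            .filter(treated=False, assigned_to__isnull=True)
            .order_by("id")
            .first()
        )
        if record is not None:
            record.assigned_to = user
            record.save(update_fields=["assigned_to"])
        return record
```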
I'm currently trying to build a web app that would allow many users to query an external API (for various reasons, I cannot retrieve all the data served by this API at regular intervals to populate my PostgreSQL database). I've read several things about ACID and MVCC, but I'm still not sure there won't be any problems if several users are populating/reading my PostgreSQL database at the very same time. So here I'm asking for advice (I'm very new to this field)!
Let's say my users query the external API to retrieve articles. They make their search via a form, the back end receives it, queries the API, populates the database, then queries the database to return some data to the front end.
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to the User model)?
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Thanks for your help!
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Yes. If the articles have unique keys (DOIs?), you could use INSERT ... ON CONFLICT DO NOTHING to handle the (presumably very rare) case where an article is requested by two people nearly simultaneously.
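(In Django terms, assuming a hypothetical Article model whose doi field has unique=True, bulk_create(ignore_conflicts=True) emits exactly that ON CONFLICT DO NOTHING clause on PostgreSQL:)

```python
from myapp.models import Article  # hypothetical model, doi = CharField(unique=True)

def store_articles(api_results):
    """Insert fetched articles, silently skipping DOIs that already exist."""
    Article.objects.bulk_create(
        [Article(doi=r["doi"], title=r["title"]) for r in api_results],
        ignore_conflicts=True,  # INSERT ... ON CONFLICT DO NOTHING on PostgreSQL
    )
```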
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to the User model)?
Do you want to? Is there a reason to? Do you care who requested each article? It sounds like you anticipate storing only the first person to request each article, and not every request?
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Right, you would be hitting the API a lot more often (assuming some large fraction of articles are requested more than once) and storing a lot of duplicates. It might not even solve the problem, if one person hits "submit" twice in a row, or has multiple tabs open, or writes a bot to hit your service in parallel.
I would like to create a marketplace-like app with Django as the backend server, where users can buy/sell items. In the app I would like to have a feature related to the geographic region of a user, such as filtering items within a specific radius in miles.
Example use case:
User uploads an item; the app gets the GPS coordinates from their mobile and stores them in the DB.
User can search for items and filter to only get items within an X-mile radius.
For this feature:
I have looked at GeoDjango, but it seems I need to extend the PostgreSQL database with the PostGIS engine to use it.
I have also looked at the Haversine formula for nearby queries (a plain-Python sketch of it follows this list).
There is also an option for multiple database support.
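(For reference, a plain-Python sketch of the Haversine distance mentioned above, no PostGIS required; the 3958.8 constant is Earth's mean radius in miles:)

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))
```

(The catch is that this runs in Python, so filtering "items within X miles" means either fetching every candidate row or re-implementing the formula in SQL; the radius queries PostGIS provides do that filtering inside the database.)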
But I have some initial doubts before proceeding, and your insights would really help me a lot. Could you please help me with these queries:
I will have to store user data and some other data, including the geo location. Will there be any differences or side effects between the postgresql_psycopg2 and postgis backends if I store all the data in one single DB?
For my simple use case, would you rather go with the Haversine formula? Or will integrating GeoDjango help me a lot in the future?
Or would multiple database support be better for me, or would it be overhead?
Points:
1. The difference between postgresql_psycopg2 and postgis is that PostGIS has built-in functionality for location distance and radius calculations, so PostGIS is good to go.
2. For multiple databases, it depends on how many users you will have. For the initial phase of the project you can go with one DB; you can improve that in the future.
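(To make point 1 concrete, a minimal GeoDjango sketch, assuming a hypothetical Item model and the django.contrib.gis.db.backends.postgis database backend; the radius filter is executed by PostGIS, not in Python:)

```python
from django.contrib.gis.db import models
from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

class Item(models.Model):
    name = models.CharField(max_length=200)
    location = models.PointField(geography=True)  # stored as PostGIS geography

def items_within(lat, lon, miles):
    """All items within `miles` of the given coordinates."""
    here = Point(lon, lat, srid=4326)  # note: Point takes (x=lon, y=lat)
    return Item.objects.filter(location__distance_lte=(here, D(mi=miles)))
```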
A little background: I've been developing the core code of an application in Python, and now I want to implement it as a website, so I've been learning Django and have come across a problem I'm not sure how to approach. I also have little experience with databases.
Each user would be able to populate their own list, each entry with the same attributes. The obvious solution seems to be to create a single model defining the attributes, have users save records to it, and filter down to their user ID; users would also be updating the attribute values of their records very frequently (maybe every 5-10 seconds or so). Each user would add on average 4,000 records to this model, so for just 1,000 users the table would have 4 million rows, and at 10,000 users, 40 million rows. To me it seems this would hurt the speed of content delivery a lot?
To me a faster solution would be to define the model and then give each user their own instance of this table of ~4,000 records. From what I'm learning this would use more memory and disk space, but a fast user experience is my primary goal.
Is this just my thinking because I don't have experience with databases? Or are my concerns warranted, and should I find a way to do the latter?
This post asked the same question, I believe, but offers no solution on how to achieve it: How to create one Model (table) for each user on django?
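(For scale reference, the shared-table approach from the question above would look roughly like this, with all names hypothetical; with an index on the user column, the per-user filter touches only that user's ~4,000 rows regardless of the table's total size:)

```python
from django.conf import settings
from django.db import models

class ListEntry(models.Model):
    """One row per record; all users share this one table."""
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    value = models.FloatField()  # hypothetical frequently-updated attribute

    class Meta:
        # ForeignKey columns are indexed by default; a composite index
        # also keeps the common "my records, ordered" query fast.
        indexes = [models.Index(fields=["owner", "id"])]

# The per-user query the question describes:
# ListEntry.objects.filter(owner=request.user)
```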
I'm looking for a way to divide the tables holding logs of user actions in Django. The site features e-books with a reader, and I need to track catalog views, what users read, and what they buy. I'm afraid these tables might grow too big to be useful.
Piwik analytics, for example, solves this with database tables where each table corresponds to one month. I would like to do the same with Django, but as far as I can see there is no built-in way to do it. Is there some way to accomplish this with the Django ORM?
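(The Django ORM has no built-in notion of per-month tables, but one workaround sometimes used is to generate a model class per month with type(), pointing Meta.db_table at a month-stamped table. A rough sketch, with all names hypothetical; the tables themselves still have to be created separately, e.g. with raw SQL in a migration, and PostgreSQL's native declarative partitioning behind a single model is worth considering as an alternative:)

```python
import datetime
from django.db import models

_model_cache = {}  # avoid registering the same model class twice

def action_log_model(month: datetime.date):
    """Return a model class bound to the actionlog_<YYYY_MM> table for that month."""
    table = f"actionlog_{month:%Y_%m}"
    if table not in _model_cache:
        class Meta:
            db_table = table
            app_label = "analytics"  # hypothetical app label

        _model_cache[table] = type(
            f"ActionLog{month:%Y%m}",
            (models.Model,),
            {
                "__module__": "analytics.models",
                "Meta": Meta,
                "user_id": models.IntegerField(),
                "action": models.CharField(max_length=50),
                "timestamp": models.DateTimeField(auto_now_add=True),
            },
        )
    return _model_cache[table]

# Usage: Log = action_log_model(datetime.date.today()); Log.objects.create(...)
```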