Alternatives to dynamically creating model fields - django

I'm trying to build a web application where users can upload a file (specifically the MDF file format) and view the data in the form of various charts. The files can contain any number of time-based signals (of various numeric data types), and users may name the signals arbitrarily.
My thought on saving the data involves 2 steps:
1. Maintain a master table as an index, to save such meta information as file names, who uploaded it, when, etc. Records (rows) are added each time a new file is uploaded (see the sketch below).
2. Create a new table (I'll refer to these as data tables) for each file uploaded; within each table, every column is one signal (the first column being timestamps).
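For illustration, the index table in step 1 might look roughly like this (all names below are placeholders):

    from django.conf import settings
    from django.db import models

    class UploadedFile(models.Model):
        """Master index: one row per uploaded MDF file."""
        file_name = models.CharField(max_length=255)
        uploaded_by = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        uploaded_at = models.DateTimeField(auto_now_add=True)
        # Name of the per-file data table created in step 2 (63 chars = Postgres identifier limit).
        data_table = models.CharField(max_length=63, unique=True)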
This brings the problem that I can't pre-define the Model for the data tables because the number, name, and datatype of the fields will differ among virtually all uploaded files.
I'm aware of some libraries that help build runtime dynamic models, but they're all dated, and questions about them on SO get basically zero answers. So even if I put in the effort to make one of them work, I'm not sure this approach is the optimal way to do what I want.
I also came across this Postgres-specific model field which can take nested arrays (which I believe fits the 2-D time-based signal lists). In theory I could parse the raw uploaded file, construct such an array, and save essentially all the data in one field. Since I don't know how large the data can get, this could also be a nightmare for queries later on: building a chart usually needs only a few signal columns at a time, out of a total of up to hundreds of signals.
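Assuming the field I mean is django.contrib.postgres.fields.ArrayField, the single-field idea would look something like this (names are placeholders; note that nested arrays must be rectangular and share one element type, which is already a constraint for mixed-type signals):

    from django.contrib.postgres.fields import ArrayField
    from django.db import models

    class SignalData(models.Model):
        upload = models.OneToOneField("UploadedFile", on_delete=models.CASCADE)  # the index model sketched above
        signal_names = ArrayField(models.CharField(max_length=255))
        # One inner array per sample: [timestamp, signal_1, signal_2, ...]
        samples = ArrayField(ArrayField(models.FloatField()))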
So my question is:
Is there a better way to organize the storage of data? And how?
Any insight is greatly appreciated!

If the names, number and data types of the fields will differ for each user, then you do not need an ORM. What you need is a query builder or SQL string composition, such as what Psycopg provides. You will be programmatically creating a table for each combination of user and uploaded file (if they differ) and programmatically inserting the records.
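A minimal sketch with psycopg2's sql module (table and column names here are invented, and the column types should come from a whitelist you control, never straight from user input):

    import psycopg2
    from psycopg2 import sql

    # Example column spec parsed from an uploaded file: {column name: SQL type}.
    COLUMNS = {"ts": "double precision", "engine_rpm": "real", "vehicle_speed": "real"}

    def create_data_table(conn, table_name, columns):
        """Create one data table per uploaded file, one column per signal."""
        col_defs = sql.SQL(", ").join(
            sql.SQL("{} {}").format(sql.Identifier(name), sql.SQL(col_type))
            for name, col_type in columns.items()
        )
        with conn.cursor() as cur:
            cur.execute(sql.SQL("CREATE TABLE {} ({})").format(
                sql.Identifier(table_name), col_defs))
        conn.commit()

    def insert_rows(conn, table_name, columns, rows):
        """Bulk-insert parsed samples; rows is an iterable of tuples."""
        stmt = sql.SQL("INSERT INTO {} ({}) VALUES ({})").format(
            sql.Identifier(table_name),
            sql.SQL(", ").join(map(sql.Identifier, columns)),
            sql.SQL(", ").join(sql.Placeholder() * len(columns)),
        )
        with conn.cursor() as cur:
            cur.executemany(stmt, rows)
        conn.commit()

    # conn = psycopg2.connect("dbname=signals")
    # create_data_table(conn, "file_42_data", COLUMNS)
    # insert_rows(conn, "file_42_data", list(COLUMNS), [(0.0, 812.5, 0.3), (0.01, 815.0, 0.4)])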
Using PostgreSQL might be a good choice; if you go the array route, you could also create a GIN index on the arrays to speed up queries.
However, if you are primarily working with time-series data, then using a time-series database like InfluxDB or Prometheus makes more sense.

Related

Django - Determine model fields and create model at runtime based on CSV file header

I need to find the best approach for determining the structure of my Django app models at runtime, based on the structure of an uploaded CSV file; the models will then be held constant once they are created in Django.
I have come across several questions about dynamically creating/altering Django models at run-time. The consensus was that this is bad practice and that one should know beforehand what the fields are.
I am creating a site where a user can upload a time-series based CSV file with many columns representing sensor channels. The user must then be able to select a field and plot the corresponding data for that field. The data will be approximately 1 billion rows.
Essentially, I am looking to implement the following steps, but information is scarce and I have never done a job like this before:
1. The user selects a CSV (or DAT) file.
2. The app loads only the header row (these files are > 4 GB).
3. The header row is split on "," (see the sketch below).
4. The results from step 3 are used to create a table whose fields (columns) correspond to the channels, with each field named after the header entry for that specific channel.
5. The corresponding data is then loaded into the respective tables, and I have my models for my app, which will then not be changed again.
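For illustration, steps 2 and 3 can be done without reading the whole file (the file name and delimiter below are placeholders):

    def read_channel_names(path, delimiter=","):
        """Read only the header line of a potentially multi-GB CSV/DAT file."""
        with open(path, "r", newline="") as f:
            header = f.readline().rstrip("\r\n")
        return [name.strip() for name in header.split(delimiter)]

    channels = read_channel_names("upload.csv")
    # e.g. ['time', 'sensor_1', 'sensor_2', ...] -> used to define the table columns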
Another option I am considering is creating a model with 10 fields, as I know there will never be more than 10 channels, then reading my CSV into the table when a user loads a file, and just leaving the unused fields empty.
Has anyone had experience with similar applications?
That is a lot of records; I've never worked with so many. For performance, the fixed-fields idea sounds best. If you use PostgreSQL you could look at the JSON field, but I don't know its impact on so many rows.
For flexible models you could use the EAV pattern, but in my experience this only works for small data sets.
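If you do try EAV, a minimal Django sketch looks like this (model names are illustrative; "Upload" stands in for whatever tracks the uploaded file). With a billion readings the Reading table gets very tall, which is exactly why this pattern tends to break down at scale:

    from django.db import models

    class Channel(models.Model):
        upload = models.ForeignKey("Upload", on_delete=models.CASCADE)  # hypothetical file model
        name = models.CharField(max_length=255)

    class Reading(models.Model):
        """One row per (channel, timestamp) pair: the attribute-value part of EAV."""
        channel = models.ForeignKey(Channel, on_delete=models.CASCADE)
        timestamp = models.FloatField()
        value = models.FloatField()

        class Meta:
            indexes = [models.Index(fields=["channel", "timestamp"])]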

Model / View: Filtering data beforehand in database or at runtime in proxy model?

Imagine an application that displays data from a sqlite database.
The app is making use of model/view programming.
It can have multiple views acting in parallel on different subsets of the same data (subsets made by filtering the required data types).
(Sidenote: I am using Qt, so there is no controller part, of course, but I did not find a more suitable tag.)
I am not sure which approach to take:
1a. Load all database data into one single model.
1b. Then apply the model to all views, filtering the data inside the view with a proxy model (see the sketch below).
2a. One model for each view, with the filtering done inside the sqlite database.
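(For reference, the per-view filtering in 1b would be along the lines of QSortFilterProxyModel; a minimal sketch in PySide6, with made-up columns, although the same classes exist in C++:)

    from PySide6.QtCore import QSortFilterProxyModel
    from PySide6.QtGui import QStandardItem, QStandardItemModel
    from PySide6.QtWidgets import QApplication, QTableView

    app = QApplication([])

    # Shared source model: all case metadata loaded once (Idea 1a).
    source = QStandardItemModel(0, 2)
    source.setHorizontalHeaderLabels(["type", "title"])
    for row in [("image", "Holiday"), ("report", "Q3 numbers"), ("image", "Cat")]:
        source.appendRow([QStandardItem(cell) for cell in row])

    # One proxy per view: this view only shows rows whose "type" column is "image" (Idea 1b).
    proxy = QSortFilterProxyModel()
    proxy.setSourceModel(source)
    proxy.setFilterKeyColumn(0)
    proxy.setFilterFixedString("image")

    view = QTableView()
    view.setModel(proxy)
    view.show()
    app.exec()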
Pros/Cons:
Idea 1:
(+) one model, makes use of model/view advantages (e.g. updating all connected views)
(-) memory usage could get huge because all data is loaded into a model, but only a subset is shown
Idea 2:
(+) theoretically lower memory usage, because only the filtered data is loaded from the database
(-) the views can have filters that lead to intersecting data, meaning the same data would be stored in more than one model -> in practice perhaps even higher memory usage than in Idea 1
The data being loaded here is just case metadata, e.g. title, description, datetime and so on. Bigger data like images and files is not loaded here. So although the database itself could indeed grow big (big for this kind of application, say 200 GB for power users), that does not really affect this question, because the metadata is much, much smaller and is proportional to the overall record count, not the data size.
Do you have practical experience with such a configuration and can suggest which one to use? It seems to me that Idea 1 is the way to go, but I am not sure about it.
In my experience, the less data is loaded from the database into memory, the better. It is not just the memory usage, but also startup time. If the data is delivered over the network, loading a few gigabytes can take forever.
So I would go for a variant of your second solution, where each table view has its own model. The model is an implementation of QAbstractItemModel that lazily fetches only the rows that currently need to be displayed. The models could, however, share a common cache. This will also make sure that they all display the same data where it intersects.
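A rough sketch of such a lazily fetching model, using PySide6 and sqlite3 (the "cases" table, its columns, and the batch size are all made up, and the shared-cache part is left out):

    import sqlite3
    from PySide6.QtCore import QAbstractTableModel, QModelIndex, Qt

    BATCH = 256  # rows pulled from the database per fetchMore() call

    class LazyCaseModel(QAbstractTableModel):
        """Feeds one view with a filtered subset, fetched in batches on demand."""

        COLUMNS = ("title", "description", "created")

        def __init__(self, db_path, where="1=1", parent=None):
            super().__init__(parent)
            self._conn = sqlite3.connect(db_path)
            self._where = where          # per-view filter, defined by the app and pushed down to sqlite
            self._rows = []              # rows fetched so far
            self._total = self._conn.execute(
                f"SELECT COUNT(*) FROM cases WHERE {where}").fetchone()[0]

        def rowCount(self, parent=QModelIndex()):
            return len(self._rows)

        def columnCount(self, parent=QModelIndex()):
            return len(self.COLUMNS)

        def data(self, index, role=Qt.DisplayRole):
            if role == Qt.DisplayRole and index.isValid():
                return self._rows[index.row()][index.column()]
            return None

        def canFetchMore(self, parent=QModelIndex()):
            return len(self._rows) < self._total

        def fetchMore(self, parent=QModelIndex()):
            start = len(self._rows)
            batch = self._conn.execute(
                f"SELECT {', '.join(self.COLUMNS)} FROM cases "
                f"WHERE {self._where} LIMIT ? OFFSET ?", (BATCH, start)).fetchall()
            if batch:
                self.beginInsertRows(QModelIndex(), start, start + len(batch) - 1)
                self._rows.extend(batch)
                self.endInsertRows()

Views call canFetchMore()/fetchMore() automatically as the user scrolls, so only the rows scrolled into view so far ever sit in memory.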

Ember index data -vs- show data

How do people deal with index data (the data usually shown on index pages, like a customer list) -vs- the model detail data?
When somebody goes to the customer/index route -- they only need access to a small subset of the full customer resource. Since I am dealing with legacy data, my customer model has > 10 relationships. It seems wasteful to have the api return a complete and full customer representation for every customer just to render a list/select/index view.
I know those relationships are somewhat lazy-loaded, but it still takes effort on the backend to pull all those relationships in. For some relationships (such as customer->invoices) this could be a large list of ids.
I feel answers to this can be very opinionated. But my two cents:
The API you are drawing on for your data should have an end-point to fetch the subset of data you're interested in, e.g. /api/mini-customer vs /api/customer.
You can then either define two separate models (one to represent the model in the list and one to represent the detailed view), or simply populate the original model with the subset of data and merge the extra data in at a later point.
That said, I've also seen plenty of cases such as the one you describe, where you load all data initially and just display the subset to begin with. If it's reasonable that the data will eventually be used and your page-load constraints can handle it, then this can be an acceptable approach.
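To make the backend side concrete (sketched here with Django REST Framework purely as an illustration; the question doesn't say what the API is built with), the two endpoints can simply be two serializers over the same model:

    from rest_framework import serializers

    from myapp.models import Customer  # hypothetical path to the existing customer model

    class CustomerListSerializer(serializers.ModelSerializer):
        """Slim representation for the index/list endpoint (e.g. /api/mini-customer)."""
        class Meta:
            model = Customer
            fields = ["id", "name"]

    class CustomerDetailSerializer(serializers.ModelSerializer):
        """Full representation, relationships included, for the detail endpoint."""
        class Meta:
            model = Customer
            fields = "__all__"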

How do I manage SAS formats from various sources?

I am wondering how I can efficiently manage formats in SAS for a reporting office that takes in data from various sources, some with proper lookup tables / metadata, and some without.
For data sources that have proper metadata, joining tables for value descriptions works fine, but when metadata doesn't exist and needs to be maintained separately, how should that be done? Some straightforward examples/ideas:
Plain .sas files with a native PROC FORMAT step that is maintained separately.
External files (e.g., Excel, CSV) that are maintained separately and imported into SAS to create a format library.
Database tables maintained separately that can be read from to create a format library.
In addition to just the formatted values, managing value changes (i.e., effective dates for certain values) is also a concern.
Any help in conventions or standards that work well for this type of task is greatly appreciated.
I'm not sure there's a single best solution here - it depends largely on your environment, your users, etc.
If you have fairly naive users, then I'd definitely recommend a single complete repository if possible; whether that is a .sas7bcat file if you are using a single SAS version/OS/bitness, or a ready-made table/dataset to input into PROC FORMAT (and a .sas file included in their autoexec to do the importing). The biggest drawbacks to this are that you have to manage it actively (you cannot allow users to write their own formats to the master format dataset, for example, as they may overwrite other ones), and that there will be additional work to ensure format names do not conflict - YNF. might be 1=YES 2=NO or 1=YES 0=NO or something else. This also doesn't allow you to very easily handle effective dates; but it's possible this is better for your users (and then just handle the documentation separately).
If you have more advanced users, then you might consider a table/dataset that is more relational in nature. A hybrid approach might include a dataset with columns:
Dataset Name (qualified as needed to ensure uniqueness)
Format Name
Start
Label
Other elements (Type, HLO, etc.)
Effective date
That would allow users to make their own modifications (assuming you trust them enough to add dataset name properly, anyway - or set up a stored proc to do the adding from a temp table that checked for conflicts) and allow you to handle format names that conflicted. You'd still have to have a way for the user to handle using multiple datasets, if that's necessary (such as by adding some unique element to the format name by default, like 'dataset ID').
In my mind, however, the best option is using a data dictionary to handle the metadata, which combines self-documentation with metadata management. Similar to the above, you have a table with dataset and format elements, but you add columns for descriptive text (the question description, for example) and other useful information, depending on your use cases. This can be maintained in a database table or dataset, or perhaps more usefully in an Excel or similar document that can be shared with non-programmers and easily edited. I use this method for several projects, and it has paid off by allowing my users to help write the documentation for my code, keeping my programs accurate and up to date, while minimizing back-and-forth discussions of updates. I just import the spreadsheet and run a PROC FORMAT step each time I run my data.
You can then have one spreadsheet per dataset, one tab, or one full spreadsheet with all datasets in them - whichever is easiest to use. This easily handles 'effective date' type issues as well - or even versioning, as that can be handled in the spreadsheet.
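The spreadsheet-to-format step itself is mechanical; in SAS it is just a PROC IMPORT followed by PROC FORMAT with CNTLIN=, but as a language-neutral sketch (shown in Python/pandas here, with invented column names), the dictionary only needs reshaping into the CNTLIN variables PROC FORMAT expects (FMTNAME, START, LABEL, TYPE):

    import pandas as pd

    # Hypothetical data dictionary: one row per coded value, maintained by non-programmers.
    dd = pd.read_excel("data_dictionary.xlsx")  # columns assumed: format_name, start, label, type

    cntlin = dd.rename(columns={
        "format_name": "FMTNAME",
        "start": "START",
        "label": "LABEL",
        "type": "TYPE",   # 'N' for numeric formats, 'C' for character formats
    })[["FMTNAME", "START", "LABEL", "TYPE"]]

    cntlin.to_csv("formats_cntlin.csv", index=False)
    # In SAS: PROC IMPORT the CSV, then: proc format cntlin=work.formats_cntlin; run;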

Design pattern for caching dynamic user content (in django)

On my website I'm going to award points for some activities, similarly to Stack Overflow. I would like to calculate the value based on many factors, so each computation for each user could take, for instance, 10 SQL queries.
I was thinking about caching it:
in memcache,
in the user's row in the database (so that wherever I need to fetch the user from the database, I can easily show the points)
Storing it in the database seems easy, but on the other hand it's redundant information, so I decided to ask, since maybe there is an easier and prettier solution that I missed.
I'd highly recommend this app for storing the calculated values in the model: https://github.com/initcrash/django-denorm
Memcache is faster than the db... but if you already have to retrieve the record from the db anyway, having the calculated values cached in the rows you're retrieving (as a 'denormalised' field) is even faster, plus it's persistent.
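A hand-rolled version of that denormalised field, if you'd rather not pull in django-denorm, could look roughly like this (model and helper names are placeholders; compute_points stands in for your ~10-query calculation):

    from django.core.cache import cache
    from django.db import models

    class Profile(models.Model):
        user = models.OneToOneField("auth.User", on_delete=models.CASCADE)
        points = models.IntegerField(default=0)  # denormalised copy of the expensive calculation

        def recalculate_points(self):
            """Run the expensive queries once, then persist and cache the result."""
            value = compute_points(self.user)    # hypothetical: your existing 10-query calculation
            self.points = value
            self.save(update_fields=["points"])
            cache.set(f"points:{self.user_id}", value, timeout=300)  # optional memcache layer
            return value

Call recalculate_points() from whatever views or signals change the underlying factors; everything else just reads profile.points.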