Connecting Two Django Apps, but Keeping Them Separated

I have two distinct Django apps, each one working completely on its own. These apps are "PointOfSale" and "Inventory". The issue is that both apps have a "Product" table in which products are inserted. Keeping those two tables filled with basically the same data is obviously redundant (in fact, the Inventory version of the Products table has more fields).
I'm now contemplating the different merge strategies to deal with this issue and I would like some help. Ideally, I would like to keep the possibility of each App working independently. So, I envisioned the following scenarios:
A save signal that keeps the two tables in sync.
Connecting the two with a OneToOne relationship.
A generic relationship.
The creation of a third app, called Products, holding the Products and CategoryOfProducts tables, which both the PointOfSale and Inventory apps use as a prerequisite.
Merging everything into one big app.
Something else that I couldn't think of.
Thanks

I think the best way is to use the same database table for both Product models. See how you can use multiple databases in a project: https://docs.djangoproject.com/en/1.10/topics/db/multi-db/#an-example
So say AppA uses DB_A and AppB uses DB_B as the databases for their own models, and the Product model/table lives in DB_A. You can configure AppB to use DB_A for the Product model while keeping DB_B for its other models.
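The wiring for that is a database router. A minimal sketch, assuming the app label "appb" and database aliases "db_a" and "db_b" (all names are illustrative):

# routers.py
class ProductRouter:
    def db_for_read(self, model, **hints):
        # Send AppB's Product model to DB_A; everything else falls through.
        if model._meta.app_label == "appb" and model.__name__ == "Product":
            return "db_a"
        return None

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Keep the Product table's migrations on DB_A only.
        if app_label == "appb" and model_name == "product":
            return db == "db_a"
        return None

Register it with DATABASE_ROUTERS = ["routers.ProductRouter"] in settings.py. Note that Django does not support cross-database relations, so foreign keys from models living in DB_B to Product would not work.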
Ideally, if the models are the same, you can package the Product app so that the same code is used in all projects.

Related

Suggestions for splitting up a Django model of 1000+ fields

I am looking for suggestions on how to deal with a database table that has 1000 or more columns that I am trying to translate into one or more Django models. The data also needs to be filterable through API calls within the URL: every field needs to be able to filter the rest of the data.
I have come up with a few solutions and would like input or resources related to them:
Just have a model with 1000+ fields - This seems like it would be a nightmare to maintain and would require a lot of brute force coding but would likely work fine if data was selectively returned.
Use a JSON field to store all less frequently accessed data - The issue here would be difficulty in filtering the data using Django Filters.
Split data into related models connected by One-to-One relationships, as I understand this cuts down on join operations. - This would seem to require more coding than the first option but would be more maintainable.
Does anyone have any information or resources on dealing with database tables of this size?
You should absolutely split the model into multiple linked models.
Because of how Django maps models onto database tables, you should generally have a 1:1 relationship between models and tables, and your tables should be normalized to at least third normal form.
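A minimal sketch of such a split, with hypothetical field names (select_related pulls in the satellite row with one join when the extra columns are actually needed):

from django.db import models

class Measurement(models.Model):
    # Frequently queried core fields stay on the main model.
    sensor_id = models.CharField(max_length=64)
    recorded_at = models.DateTimeField()

class MeasurementDetails(models.Model):
    # Rarely used columns move to a satellite table, joined only on demand.
    measurement = models.OneToOneField(
        Measurement, on_delete=models.CASCADE, related_name="details"
    )
    extra_reading_1 = models.FloatField(null=True)
    extra_reading_2 = models.FloatField(null=True)

Usage: Measurement.objects.select_related("details").filter(sensor_id="abc") fetches both rows in a single query.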

Need guidance with creating Django based dashboard

I'm a beginner at Django, and as a practice project I would like to create a webpage with a dashboard to track investments in a particular P2P platform. They do not have a nice dashboard (but they provide an Excel file with all the data). As I see it, the main steps I need to take in this project are as follows:
Create a login so that users have an account where they can upload their Excel files.
Make it possible to import an Excel file into a database.
Manipulate/calculate the data so it can later be used in the dashboard.
Create the dashboard.
Host the webpage.
After some struggle I have implemented point no. 2, and will deal with 1 and 5 later. But number 3 is my biggest issue now.
I'm completely unsure what I need to do, and Google did not help. I need to calculate the data before I can build a dashboard from it: union two of the tables, then join them with a third table, creating some additional calculated fields. Do I create a view in the database and somehow fetch this data into Django? Or do I need to create some rules so that a new table is created at import time? I think having a table instead of a view would perform better. Or maybe I'm doing this completely wrong and should take a different approach to this kind of task? Also, is SQLite a good database for this (I'm using it because it was the default in Django)?
I assume that for the visualization part I will need some JavaScript library, such as D3, which would then use the data from step 3.
For part 3 there are two ways: either do these calculations up front and save the result in your database, or compute them when you need them using Django model features like annotation and aggregation.
Option 1 requires adding a table for your calculations, which means another model in Django.
Option 2 requires doing the annotations in a view or in model managers and then using them in your views.
Django docs: Aggregation
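As a minimal sketch of option 2 (the Investment model and its fields are hypothetical placeholders for your imported data):

from django.db.models import Avg, Count, Sum

def dashboard_data(user):
    # Compute summary figures on demand instead of storing them.
    qs = Investment.objects.filter(owner=user)
    totals = qs.aggregate(
        total_invested=Sum("amount"),
        average_investment=Avg("amount"),
        positions=Count("id"),
    )
    # One row per platform, with the invested sum per group.
    per_platform = qs.values("platform").annotate(invested=Sum("amount"))
    return totals, per_platform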
Which one is best depends on how big your data is, how complicated the calculations are, and how often you need them.
And about the database: SQLite is a database for development use, not for production, and certainly not with a lot of data and a lot of calculations. The recommended database for Django is PostgreSQL, which is pretty good at handling millions or even billions of rows and doing heavy calculations.
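Switching is mostly a settings change; a minimal sketch (names and credentials are placeholders):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "dashboard",
        "USER": "dashboard_user",
        "PASSWORD": "change-me",
        "HOST": "localhost",
        "PORT": "5432",
    }
}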
And for visualization, you should handle it on the template side, which is basically HTML, CSS and JS.

How to know where database has changed

I have a project that looks like a simple shopping site that sells different kinds of products. For example, I have models such as Brand, Product, and Consignment. Consignment is linked to Product, and Product is linked to Brand. To reduce the number of queries to the database, I want to save the current state of these models (or at least some of them). I want to do this because I show a sidebar with brands and products, so every time a user opens a page, a query is executed against the database to get those brands and products.
But when an admin adds a new product or brand, I want to detect the database change and refresh the saved state. How do I implement that?
The answer is to use a cache. A cache stores your objects temporarily in memory or in another service such as Redis, so that you do not need to send queries to the database. You can read the full description here.
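As a minimal sketch with Django's low-level cache API (the cache key and timeout are arbitrary choices; Brand is the model from the question):

from django.core.cache import cache
from django.db.models.signals import post_save

def sidebar_brands():
    # Serve from the cache when possible; rebuild the entry on a miss.
    brands = cache.get("sidebar_brands")
    if brands is None:
        brands = list(Brand.objects.all())
        cache.set("sidebar_brands", brands, timeout=60 * 15)  # 15 minutes
    return brands

def refresh_sidebar(sender, **kwargs):
    # Drop the stale entry whenever a Brand is saved.
    cache.delete("sidebar_brands")

post_save.connect(refresh_sidebar, sender=Brand)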
Or, you can use this third-party library that helps you cache Django ORM models. Here is an example:
Brand.objects.filter(name='stackoverflow').cache()
After doing an update to the model, you need to clear or invalidate the cache.
invalidate_model(Brand)

Warehousing records from a flat item table: Django Signals or PostgreSQL Triggers?

I have a Django website with a PostgreSQL database. There is a Django app and model for a 'flat' item table with many records being inserted regularly, up to millions of inserts per month. I would like to use these records to automatically populate a star schema of fact and dimension tables (initially also modeled in the Django models.py), in order to efficiently do complex queries on the records, and present data from them on the Django site.
Two main options keep coming up:
1) PostgreSQL triggers: Configure the database directly to insert the appropriate rows into the fact and dimension tables on creation or update of a record, possibly using PL/Python or PL/pgSQL and row-level AFTER triggers. Pros: works with inputs from outside Django; might be expected to be more efficient. Cons: splits business logic across two locations; the triggered inserts may not be expected by other input sources.
2) Django signals: Use the signals framework to do the inserts upon creation or update of a record, via the built-in django.db.models.signals.post_save signal. Pros: easier to build and maintain. Cons: you have to repeat some code, or stay inside the Django site/app environment, to support new input sources.
Am I correct in thinking that Django's built-in signals are the way to go for maintaining the fact table and the dimension tables? Or is there some other, significant option that is being missed?
I ended up using Django signals. With a flat table "item_record" containing the fields "title" and "description", the code in models.py looks like this:
from django.db.models.signals import post_save

def create_item_record_history(sender, instance, created, **kwargs):
    # Only copy the record on initial creation, not on later updates.
    if created:
        ItemRecordHistory.objects.create(
            title=instance.title,
            description=instance.description,
            created_at=instance.created_at,
        )

post_save.connect(create_item_record_history, sender=ItemRecord)
It is running well for my purposes. Although it's just creating an annotated flat table (new field "created_at"), the same method could be used to build out a star schema.

How to create django models Dynamically

My Django application needs to collect user data (name, age, country, etc.) based on the user's email domain ('gmail' as in xyz@gmail.com). I wish to create a new table every time I encounter a new email domain.
Can this be done in Django?
This is a bad idea. Your tables would all have the same structure. All of your data should be stored in a single table, with a domain column to keep the data separate. Why would you want a different table for each domain? Whatever reason you have, there's a better way to do it.
This idea goes against everything in the design of the relational database, and the Django ORM on top of it.
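For illustration, a minimal sketch of the single-table approach (all field names are hypothetical):

from django.db import models

class UserProfile(models.Model):
    name = models.CharField(max_length=100)
    age = models.PositiveIntegerField(null=True)
    country = models.CharField(max_length=64)
    email = models.EmailField()
    # One indexed column keeps the domains separate in a single table.
    domain = models.CharField(max_length=255, db_index=True)

    def save(self, *args, **kwargs):
        # Derive the domain from the email so the two never drift apart.
        self.domain = self.email.rsplit("@", 1)[-1].lower()
        super().save(*args, **kwargs)

Querying per domain is then just UserProfile.objects.filter(domain="gmail.com").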