Django - How can the Django ORM manage users' uploaded tables in the database?

My web application allows users to load/create tables in the Postgres database. I know the Django ORM needs a model definition in models.py for each table it accesses. How can I access the users' uploaded tables in the app without creating a new model definition on the fly each time a new table is uploaded? I was thinking about creating a generic model definition that decomposes each table into its components, like this:
models.py
from django.db import models

class Table(models.Model):
    filename = models.CharField(max_length=255)

class Attribute(models.Model):
    table = models.ForeignKey(Table, on_delete=models.CASCADE)
    name = models.CharField(max_length=255)
    type = models.IntegerField()
    width = models.IntegerField()
    precision = models.IntegerField()

class Row(models.Model):
    table = models.ForeignKey(Table, on_delete=models.CASCADE)

class AttributeValue(models.Model):
    row = models.ForeignKey(Row, on_delete=models.CASCADE)
    attribute = models.ForeignKey(Attribute, on_delete=models.CASCADE)
    value = models.CharField(max_length=255, blank=True, null=True)
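For illustration, loading an uploaded CSV into this generic schema would look roughly like this (a simplified sketch; the type/width values and the CSV handling are placeholders, not part of the real app):

import csv

def load_csv(filename, file_obj):
    # Every column is stored as a text attribute in this sketch.
    reader = csv.reader(file_obj)
    header = next(reader)

    table = Table.objects.create(filename=filename)
    attributes = [
        Attribute.objects.create(table=table, name=col, type=0, width=255, precision=0)
        for col in header
    ]

    for values in reader:
        row = Row.objects.create(table=table)
        # bulk_create keeps the number of INSERTs per row down.
        AttributeValue.objects.bulk_create([
            AttributeValue(row=row, attribute=attr, value=val)
            for attr, val in zip(attributes, values)
        ])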
The problem with such a generic model is that every table is mixed into four tables (not useful in the admin interface), and it is really slow to create when you have a lot of rows. Do you have any suggestions for this case?
Edit: Could it be viable to use a separate database to store those tables, and use a router plus manage.py inspectdb to update its models.py each time a user adds or deletes a table? (like in this post) I wonder what would happen if two users added a table at the same time?

I think you should look into dynamic models like here:
https://code.djangoproject.com/wiki/DynamicModels
or here:
http://dynamic-models.readthedocs.org/en/latest/
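As a rough idea of what those pages describe, a model class can be built at runtime with type() and its table created through the schema editor. The app label and naming below are illustrative assumptions, not code from those pages:

from django.db import connection, models

def create_table_model(table_name, column_names):
    # Build a model class at runtime; 'uploads' is a hypothetical app label.
    attrs = {
        '__module__': 'uploads.models',
        'Meta': type('Meta', (), {'db_table': table_name, 'app_label': 'uploads'}),
    }
    for col in column_names:
        attrs[col] = models.CharField(max_length=255, blank=True, null=True)

    model = type(table_name.title().replace('_', ''), (models.Model,), attrs)

    # Create the underlying database table for the new model.
    with connection.schema_editor() as schema_editor:
        schema_editor.create_model(model)

    return model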
Good luck, because it's not an easy path, my friend :)

You'll probably need to use raw SQL queries for doing this.
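A minimal sketch of what that could look like with Django's cursor API, assuming the uploaded table's name is already known (identifier handling is simplified here):

from django.db import connection

def fetch_rows(table_name):
    # Query a user-created table directly, bypassing the ORM.
    # table_name must be validated/whitelisted, since identifiers
    # cannot be passed as query parameters.
    with connection.cursor() as cursor:
        cursor.execute('SELECT * FROM "{}"'.format(table_name))
        columns = [col[0] for col in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]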
If the schemas of the tables you are expecting are predefined, you can use a database router to link a model to a specific table name for each user.

Related

Using same database tables for different projects in django postgresql

I am new to the Django framework. I want to use the same (PostgreSQL) database tables for 2 different projects. I was able to access the same database from two different projects, but how do I access the tables?
Firstly, I have this model in project1:
class employees(models.Model):
    employeeID = models.IntegerField()
    employeeName = models.CharField(max_length=100)
This would create a project1_employees table in the database. I want to access this table in project2. If I have the same model in project2 and migrate, it creates a new table project2_employees in the same database. These are two entirely different projects.
For the second project, change the Meta of your model this way:
class employees(models.Model):
    employeeID = models.IntegerField()
    employeeName = models.CharField(max_length=100)

    class Meta:
        managed = False
        db_table = 'project1_employees'
And then make fake migrations:
python manage.py migrate --fake
Use managed = False in the Meta class of the model in the second project. This will prevent migrations from creating the table. For more information see:
https://docs.djangoproject.com/en/3.0/ref/models/options/#managed
What you need to consider is whether the projects will diverge enough that they will end up needing separate databases/tables.

Which database design is better for django follow/unfollow?

I've been looking for a good database design for a Twitter-like social network site in my Django project, and I found two possibilities.
This one down here:
class Following(models.Model):
    follower = models.ForeignKey(User, on_delete=models.CASCADE, related_name='following')
    following = models.ForeignKey(User, on_delete=models.CASCADE, related_name='followers')
And this other one:
class User(AbstractUser):
    follows = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='followed_by')
    pass
Are these the same? Is there any difference here? Which one should I choose? I'm kind of new to this, so I can't figure out which one is the best option. I find the first one easier to understand.
If I add this to my user model
following = models.ManyToManyField('self', related_name="followers")
and run (assuming auth is the app where your user model is, and replacing 000X by the number of the generated migration)
python manage.py makemigrations auth
python manage.py sqlmigrate auth 000X
this is what I get:
CREATE TABLE `auth_user_following` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`from_user_id` integer NOT NULL, `to_user_id` integer NOT NULL);
ALTER TABLE `auth_user_following` ADD CONSTRAINT `auth_user__from_user_id_b9318b74_fk_auth_`
FOREIGN KEY (`from_user_id`) REFERENCES `auth_user` (`id`);
ALTER TABLE `auth_user_following` ADD CONSTRAINT `auth_user__to_user_id_b51bc961_fk_auth_`
FOREIGN KEY (`to_user_id`) REFERENCES `auth_user` (`id`);
ALTER TABLE `auth_user_following` ADD CONSTRAINT `auth_user_foll_from_user_id_to_au_88cd5a29_uniq`
UNIQUE (`from_user_id`, `to_user_id`);
So it creates a table with an auto-generated id and two foreign key columns, just as it would do with the explicit relation-only model, i.e. on the database side, there is no structural difference.
For code readability, I would much prefer to keep the relation in the model and not define it in a different class. However, if you want to add additional data to the relation (e.g. date_started_following), you will need an explicit relation model. Then, you might still want to declare this many-to-many relation on your user model and point to the explicit relation using the through argument:
However, sometimes you may need to associate data with the relationship between two models. [...] Django allows you to specify the model that will be used to govern the many-to-many relationship. You can then put extra fields on the intermediate model. The intermediate model is associated with the ManyToManyField using the through argument to point to the model that will act as an intermediary.
One other reason for the first approach or an explicit through model is that it might facilitate some queries about the relationship, e.g. "find users who follow each other".
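If you want both the explicit relation model and the many-to-many field on the user model, a rough sketch could look like the following. The date_started_following field is the assumed extra data, and it is assumed this User class is the project's AUTH_USER_MODEL:

from django.conf import settings
from django.contrib.auth.models import AbstractUser
from django.db import models

class Following(models.Model):
    # Explicit relation model so extra data can live on the relation itself.
    follower = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='following')
    following = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='followers')
    date_started_following = models.DateTimeField(auto_now_add=True)

class User(AbstractUser):
    # The many-to-many field stays on the user model for readability,
    # but it is backed by the explicit Following model via `through`.
    follows = models.ManyToManyField(
        'self',
        symmetrical=False,
        through=Following,
        through_fields=('follower', 'following'),
        related_name='followed_by',
    )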
I would suggest that both model designs will work fine.
If you want to create a custom user model with new fields, use the code format below.
AbstractUser: keeps the existing fields of the user model.
AbstractBaseUser: use this if you want to build your own user model from scratch.
class User(AbstractUser):
    follows = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='followed_by')
    pass
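If you go the custom user model route, remember to point Django at it in settings; the app label here is an assumption:

# settings.py
AUTH_USER_MODEL = 'accounts.User'  # 'accounts' is a hypothetical app name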
If you want to segregate your app-related changes instead, use the model code below.
class Following(models.Model):
    follower = models.ForeignKey(User, on_delete=models.CASCADE, related_name='following')
    following = models.ForeignKey(User, on_delete=models.CASCADE, related_name='followers')
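With the explicit Following model, typical queries could look like this (a sketch; alice and bob are assumed User instances):

# Users that alice follows
following_users = User.objects.filter(followers__follower=alice)

# Users that follow alice
follower_users = User.objects.filter(following__following=alice)

# Does alice already follow bob?
alice_follows_bob = Following.objects.filter(follower=alice, following=bob).exists()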

How to set up flexible Django models

I'm new to Django. What I'm trying to achieve is that when the ProductType combobox is changed, the form changes to that type's specific fields; the user then inputs data using those fields, and it is entered into the database. My problem is how to create a flexible model that would accept extra fields.
from django.db import models

class Product(models.Model):
    """SKU"""
    stock = models.IntegerField(default=None)

class ProductType(models.Model):
    product_field = models.ForeignKey('ProductFields', on_delete=models.CASCADE)
    name = models.CharField(max_length=255)

class ProductFields(models.Model):
    """Fields of ProductType"""
Here's a DB example of what I'm trying to achieve (see image).
An SQL database is not well suited for that purpose.
Look into non-SQL databases, for example Firebase.

Automatic creation of database view from Django

Maybe this question is also similar to something like "automatic execution of raw SQL code just before creating exactly one special model in models.py with managed=False".
For example, I have 3 models in models.py (the default User, UserTypes, and a relation between them):
from django.contrib.auth.models import User
from django.db import models

class UserTypes(models.Model):
    type = models.TextField(unique=True)

    class Meta:
        db_table = 'user_types'

class UsersHaveTypes(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    type = models.ForeignKey(UserTypes, on_delete=models.CASCADE)

    class Meta:
        db_table = 'users_have_types'
        unique_together = (("user", "type"), )
I have a few types of users, one of which is clients. Now I want to create a "/clients" endpoint, which will work like a usual model (CRUD).
Right now, I just add this in models.py:
class Clients(models.Model):
    first_name = models.TextField(blank=True, null=True)
    last_name = models.TextField(blank=True, null=True)
    # all other fields from the User model, which are duplicated in the database view

    class Meta:
        managed = False
        db_table = 'clients'
and then I do the following by hand in my database backend:
1. Create a database view
2. Create a function to handle Insert, Update and Delete
3. Create a trigger that calls these functions.
It's a bunch of code, but there is nothing special about it, and it all works fine.
All of this is to make CRUD work on the "/clients" endpoint, so that, for example, on creation it will create a User and automatically add the correct row to the users_have_types table, which marks this user as a "client".
Is there a more elegant and automatic way to do this? Every time I move my Django project, I need to create the view, function, and trigger in the database backend again, which takes a lot of time (all by hand, for every "type of user") and is an ugly solution.
I'm surprised that nobody has asked this question before, because it's good database style to create some views and hide the real tables behind them. I know that Django can't create database views by itself, but there must be a way to describe custom SQL code that is used to create a table in the database. Maybe something with managers, I don't know (I'm a newbie in Django). Of course, it will be specific to only one database backend, but that's fine.
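One possible way to keep that SQL with the project (not an answer from the thread, just a sketch) is to put it in a migration with migrations.RunSQL; the app label, migration name, and view definition below are simplified assumptions based on the models above:

from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical app and previous migration
    ]

    operations = [
        migrations.RunSQL(
            sql="""
                CREATE VIEW clients AS
                SELECT u.id, u.first_name, u.last_name
                FROM auth_user u
                JOIN users_have_types uht ON uht.user_id = u.id
                JOIN user_types t ON t.id = uht.type_id AND t.type = 'client';
            """,
            reverse_sql="DROP VIEW clients;",
        ),
    ]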

Django REST Framework API with an existing MySQL database

How can I create a Django REST Framework API that connects to already existing MySQL tables instead of creating them through models.py? My models.py shows something like this:
class Author(models.Model):
    first_name = models.CharField(max_length=20)
    last_name = models.CharField(max_length=20)

    def __str__(self):
        return f'{self.first_name} {self.last_name}'
Instead of this, I need to take data directly from existing tables in MySQL.
For that you need to define a model whose Meta options point at your existing table name.
For example, if RandomTable(id INT(10), name VARCHAR(10)) is your existing MySQL table, then the models.py for it will be:
class AppnameRandomTable(models.Model):
    id = models.IntegerField(primary_key=True, db_column="id")  # column name in the existing table
    name = models.CharField(max_length=10, db_column="name")

    class Meta:
        managed = False  # don't let migrations try to create this table
        db_table = "RandomTable"  # your existing MySQL table name
The Meta section is where you point the model at the name of your existing table.
A time-saving hack: run "python manage.py inspectdb" in the terminal and you will automatically get model definitions with all the column names.
You can just copy and paste them from there, because even for an existing MySQL table you need to declare its columns as fields in your class in order to read and write them.
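Once the unmanaged model exists, exposing it through Django REST Framework works the same as for any other model; a minimal sketch (the serializer and viewset names are assumptions):

from rest_framework import serializers, viewsets
from .models import AppnameRandomTable

class RandomTableSerializer(serializers.ModelSerializer):
    class Meta:
        model = AppnameRandomTable
        fields = '__all__'

class RandomTableViewSet(viewsets.ModelViewSet):
    queryset = AppnameRandomTable.objects.all()
    serializer_class = RandomTableSerializer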
python manage.py inspectdb > models.py
If you run that command it will create a models.py in the project's root directory. Once you've done that you can either move it directly into the project or create a models folder and break it down into areas of concern from there. You will likely have to do the work of adding related_name = 'foo' to a lot of fields that have relationships with other models. That can be time-consuming but it works.