Django fixture creation, ignoring relations between objects

I'm testing views in a Django app. There are a lot of one-to-many and many-to-many relations between models (users, departments, reports, etc.). Filling in fields like name, surname, and date of birth while creating a fixture takes a lot of time, and I don't use those fields at all. How can I skip them? Also, what are the best practices for creating a fixture? Mine looks like this:
class TestReportModel(TestCase):
    allow_database_queries = True

    @classmethod
    def setUpTestData(cls):
        cls.report_id = 99
        cls.factory = RequestFactory()
        cls.user_with_access = User.objects.create(username="user1", password="password")
        cls.employee = Employee.objects.create(user=cls.user_with_access, fio="name1 surname1",
                                               date_of_birth="2012-12-12")
        cls.indicator = Indicator.objects.create(context_id=10, set_id=10)
        cls.ife = IndicatorsForEmployees.objects.create(employee=cls.employee, indicator=cls.indicator)
        cls.report = Report.objects.create(owner=cls.ife)
        cls.report.id = cls.report_id
        cls.report.save()
        cls.user_with_no_access = User.objects.create(username="user_with_no_access", password="password")
        cls.employee_with_no_access = Employee.objects.create(user=cls.user_with_no_access, fio="name2 surname2",
                                                              date_of_birth="2018-12-12")

It sounds like you need to specify a test database in your settings file, load a fixture with loaddata, and then use the --keepdb flag.
In your settings file, you can specify a test database name within DATABASES:
https://docs.djangoproject.com/en/2.0/ref/settings/#std:setting-DATABASE-TEST
If this database is not found, it will be created when you run the tests. Once your fixture is created, you can use loaddata to load it:
https://code.djangoproject.com/wiki/Fixtures#Fixtures
Then pass --keepdb when you run your unit tests, and the database will persist between test runs:
https://docs.djangoproject.com/en/2.0/ref/django-admin/#cmdoption-test-keepdb
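A minimal sketch of that settings entry; all database names here are placeholders:

# settings.py -- names are placeholders, adjust to your project
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "TEST": {
            "NAME": "mytestdb",  # reused across runs when --keepdb is passed
        },
    },
}

Load your fixture with loaddata (e.g. python manage.py loaddata your_fixture.json, where your_fixture.json is a placeholder name), then run the suite with python manage.py test --keepdb so the data survives between runs.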

Related

Can't db.drop_all() when creating tables with SqlAlchemy op.create_table

I'm building a Flask service that uses SQLAlchemy Core for database operations, but I'm not using the ORM, just dispatching raw SQL to the PostgreSQL db. To track database migrations I'm using Alembic.
My migration looks roughly like this:
def upgrade():
    # Add the ossp extension to enable use of UUIDs
    add_extension_command = 'create EXTENSION if not EXISTS "uuid-ossp";'
    bind = op.get_bind()
    session = Session(bind=bind)
    session.execute(add_extension_command)

    # Create tables
    op.create_table(
        "account",
        Column(
            "id", UUID(as_uuid=True), primary_key=True, server_default=text("uuid_generate_v4()"),
        ),
        Column("created_at", DateTime, server_default=sql.func.now()),
        Column("deleted_at", DateTime, default=None),
        Column("modified_at", DateTime, server_default=sql.func.now()),
    )
This works great in general. The main issue I'm having is with testing: after each test, I want to drop and rebuild the DB to clean out the data. To do this, I'm using pytest, and I created the following app fixture:
@pytest.fixture
def app():
    app = create_app("testing")
    with app.app_context():
        db.init_app(app)
        Migrate(app, db)
        upgrade()
        yield app
        db.drop_all()
The general idea is that each time we need the app context, we apply the database migrations, yield the app, and then drop all the tables once the test is done.
The issue is that db.drop_all() does nothing. I believe this is because the db object is not bound to any MetaData. The research I did led me here, which mentions that the create_table command does not register the tables on any MetaData, which I assume is why the app is not aware of which tables are available to drop.
I'm a bit stuck as to the right path forward. Should I change how I'm building these migrations? Is this not the right pattern to make sure I remove test data from the DB between tests?
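One workaround is to reflect the current schema into a fresh MetaData object, so that drop_all has something to work from. A minimal sketch, assuming db is the Flask-SQLAlchemy instance used in the fixture above:

# Hypothetical teardown: reflect whatever tables exist, then drop them all.
from sqlalchemy import MetaData

meta = MetaData()
meta.reflect(bind=db.engine)   # discover the tables the migration created
meta.drop_all(bind=db.engine)  # drop them, regardless of db's own metadata

Note that reflection also picks up Alembic's alembic_version table, so dropping everything means the next upgrade() starts from a clean slate, which is what you want between tests.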

How can I initialise group names in Django every time the program runs?

I have this code, and I want it to create the groups every time the program runs, so that if the database is deleted the program will still work on its own and someone won't have to create the groups again. Do you know an easy way to do this?
system_administrator = Group.objects.get_or_create(name='system_administrator')
manager = Group.objects.get_or_create(name='manager')
travel_advisor = Group.objects.get_or_create(name='travel_advisor')
If you lose your DB, you'd have to rerun migrations on a fresh database before the program could run again, so I think data migrations might be a good solution for this. A data migration is a migration that runs Python code to alter the data in the DB, rather than the schema as a normal migration does.
You could do something like this, in a new migration file (you can run python manage.py makemigrations --empty yourappname to create an empty migration file for an app):
from django.db import migrations


def generate_groups(apps, schema_editor):
    # Group lives in django.contrib.auth, so fetch it from the 'auth' app
    Group = apps.get_model('auth', 'Group')
    Group.objects.get_or_create(name="system_administrator")
    Group.objects.get_or_create(name="manager")
    Group.objects.get_or_create(name="travel_advisor")


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', 'previous migration'),
    ]

    operations = [
        migrations.RunPython(generate_groups),
    ]
Worth reading the docs on this https://docs.djangoproject.com/en/3.0/topics/migrations/#data-migrations
You can do it in the ready method of one of your apps.
class YourApp(AppConfig):
    def ready(self):
        # Important: do the import inside the method
        from django.contrib.auth.models import Group
        Group.objects.get_or_create(name='system_administrator')
        Group.objects.get_or_create(name='manager')
        Group.objects.get_or_create(name='travel_advisor')
The problem with the data-migrations approach is that it is only useful for populating the database the first time. If the groups are deleted after the data migration has run, you will need to populate them again.
Also remember that get_or_create returns a tuple:
group, created = Group.objects.get_or_create(name='manager')
# group is an instance of Group
# created is a boolean

Database Error on Dotcloud, Postgres Django

I'm deploying my Django app with Dotcloud.
I use Postgres as the DB.
I added a new model to my app, and I wanted to flush and syncdb the DB. Everything works fine when I do it. My new model, named 'Competition', appears in my admin.
The problem is that another model, Match, has a ForeignKey to the model Competition. And when I go to 'Matchs' in my admin, I get this error:
DatabaseError at /admin/myproject/match/
column myproject_match.competition_id does not exist
LINE 1: ...team_challenger_id", "sportdub_match"."sport_id", "sportdub_...
Any idea why syncdb didn't make this work?
Thank you for your help.
EDIT: My two models
class Competition(models.Model):
    name = models.CharField(max_length=256)
    comp_pic = models.ImageField(upload_to="comp_pics/")

    def __unicode__(self):
        return self.name


class Match(models.Model):
    team_host = models.ForeignKey(Team, related_name='host_matches')
    team_challenger = models.ForeignKey(Team, related_name='challenger_matches')
    sport = models.ForeignKey(Sport)
    competition = models.ForeignKey(Competition)
manage.py syncdb will only create missing tables. If a table already exists, but with an outdated definition, it will not be updated. This is very probably the problem you are experiencing.
There are at least three ways to solve the problem.
The easy way: use manage.py reset to drop and recreate all the tables. Easy, but of course you will lose the data.
The lazy way: install django_extensions and use manage.py sqldiff. It will show you the difference between the current structure of the database and the expected structure, as SQL statements that would bring the existing database in line with your models. You can then execute those statements in a SQL shell (see the sketch after this list).
The clean way: use a migrations framework like South to handle database schema updates. Of course, if you are early in development this is overkill (you do not want to write database migrations each time you add/change/remove a field during local development!), but if your project will have a longer life span, I definitely recommend checking out South.
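For the lazy way, applying the emitted statement from a Django shell might look roughly like this. The table and column names below are assumptions inferred from the error message above, not verified against the real schema:

# Hypothetical fix-up: add the missing ForeignKey column by hand.
from django.db import connection, transaction

cursor = connection.cursor()
cursor.execute(
    'ALTER TABLE "myproject_match" '
    'ADD COLUMN "competition_id" integer '
    'REFERENCES "myproject_competition" ("id")'
)
transaction.commit_unless_managed()  # old-style Django transaction handling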

Django south migration error with unique field in postgresql database

Edit: I now understand why this happened. It was because of the existence of the `initial_data.json` file. Apparently, South wants to load those fixtures after the migration, but fails because of the unique constraint on a field.
I changed my model from this:
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=40)

    def __unicode__(self):
        return self.anahtar
To this:
class Setting(models.Model):
    anahtar = models.CharField(max_length=20, unique=True)
    deger = models.CharField(max_length=100)

    def __unicode__(self):
        return self.anahtar
The schemamigration command completed successfully, but running migrate gives me this error:
IntegrityError: duplicate key value violates unique constraint
"blog_setting_anahtar_key" DETAIL: Key (anahtar)=(blog_baslik) already
exists.
I want to keep that field unique, but still migrate it. By the way, data loss on that table is acceptable, as long as the other tables in the DB stay intact.
It's actually the default behavior of syncdb to load initial_data.json each time it runs. From the Django docs:
If you create a fixture named initial_data.[xml/yaml/json], that fixture will be loaded every time you run syncdb. This is extremely convenient, but be careful: remember that the data will be refreshed every time you run syncdb. So don't use initial_data for data you'll want to edit.
Personally, I think the use-case for initial data that gets reloaded on every change is dubious, so I never use initial_data.json.
The better method, since you're using South, is to manually call loaddata on a specific fixture necessary for your migration. In the case of initial data, that would go in your 0001_initial.py migration.
def forwards(self, orm):
    from django.core.management import call_command
    call_command("loaddata", "my_fixture.json")
See: http://south.aeracode.org/docs/fixtures.html
Also, remember that the path to your fixture is relative to the project root. So, if your fixture is at "myproject/myapp/fixtures/my_fixture.json" call_command would actually look like:
call_command('loaddata', 'myapp/fixtures/my_fixture.json')
And, of course, your fixture can't be named 'initial_data.json'; otherwise the default behavior will take over.

Django-Python/MySQL: How can I access a field of a table in the database that is not present in a model's field?

This is what I wanted to do:
I have a table imported from another database. The majority of the columns in one of the tables have names that look something like this: AP1|00:23:69:33:C1:4F, and there are a lot of them. I don't think Python will accept them as field names.
I want to aggregate over them without having to list them as fields in the model. As much as possible I want the aggregation to be triggered from within the Django application, so I don't want to resort to writing MySQL queries outside the application.
Thanks.
Unless you want to write raw SQL, you're going to have to define a model. Since your model fields don't have to be named the same as the columns they represent, you can give your fields useful names.
class LegacyTable(models.Model):
    useful_name = models.IntegerField(db_column="AP1|00:23:69:33:C1:4F")

    class Meta:
        db_table = "LegacyDbTableThatHurtsMyHead"
        managed = False  # syncdb does nothing
You may as well do this regardless. As soon as you need another column from your legacy database table, just add another_useful_name to your model, with db_column set to the column you're interested in.
This has two solid benefits: you no longer have to write raw SQL, and you don't have to define all the fields up front.
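Once the columns are mapped, the aggregation the question asks about works through the normal ORM. A small usage sketch, reusing the placeholder field name from the model above:

from django.db.models import Avg, Sum

# Aggregate over the renamed legacy column as with any other field.
totals = LegacyTable.objects.aggregate(
    total=Sum('useful_name'),
    average=Avg('useful_name'),
)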
The alternative is to define all your fields in raw sql anyway.
Edit:
The Legacy Databases docs describe a method for inspecting an existing database and generating a models.py file from the existing schema. This may help by doing all the heavy lifting for you (nulls, lengths, types, fields); you can then modify the generated definitions to suit your needs:
python manage.py inspectdb > legacy.py
http://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly
Django allows you to perform raw SQL queries. Without more information about your tables, that's about all I can offer.
A custom query:
def my_custom_sql(baz):
    from django.db import connection, transaction

    cursor = connection.cursor()

    # Data modifying operation - commit required
    cursor.execute("UPDATE bar SET foo = 1 WHERE baz = %s", [baz])
    transaction.commit_unless_managed()

    # Data retrieval operation - no commit required
    cursor.execute("SELECT foo FROM bar WHERE baz = %s", [baz])
    row = cursor.fetchone()
    return row
Accessing other databases:
from django.db import connections, transaction

cursor = connections['my_db_alias'].cursor()
# Your code here...
transaction.commit_unless_managed(using='my_db_alias')