Is there any way I can use my default local database for testing in Django 1.9? I also don't want to run any migrations, and I want to test locally.
The reason I want to do it this way is that my migrations include a data migration that looks up an entry from a model with .get(). When the tests run and create a test database, that migration fails because the test database has no entries for the model.
I don't know how I should resolve this issue. The best way I could think of is to use my default database for testing.
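For context, the failing pattern looks roughly like this; the app, model, and entry names here are invented for illustration. Against the empty test database, the .get() raises DoesNotExist:

    # myapp/migrations/0002_seed_default_category.py (illustrative)
    from django.db import migrations

    def forwards(apps, schema_editor):
        Category = apps.get_model("myapp", "Category")
        # Works against the populated development DB, but raises
        # Category.DoesNotExist in the empty test database:
        default = Category.objects.get(name="default")
        Category.objects.filter(parent=None).exclude(pk=default.pk).update(parent=default)

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0001_initial")]
        operations = [migrations.RunPython(forwards, migrations.RunPython.noop)]

One way to make such a migration safe against an empty database is to guard the lookup, e.g. Category.objects.filter(name="default").first(), and return early when it yields nothing.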
Related
I am currently writing test cases for views, which eventually use the database as well.
By default, a test database is created and removed after the tests are run.
As the database itself is a development database, I don't want my tests to create a separate DB; they should use the existing one only.
I should also bring to your notice that in my environment databases are created and provided for us, and Django can't run migrations or create a database.
How can I create unit tests that use the real database?
I think the primary reason for this question is that your Django database user does not have the create/drop database permission.
Django needs to create and drop a test database for the purpose of unit testing; it cannot use an existing database for this. The reason an existing database is not allowed in unit tests is that its data can be modified by anyone with the same database permissions, and Django has no control over those updates, which might end in a failing unit test.
This is clearly explained in another question's answer.
If your DB admin can give your Django user the access required for the test module to work as expected, you can make use of fixtures. Fixtures are data files that can be created from your development environment and then loaded during unit test setup to import data into the test database Django creates.
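As a sketch, with app, model, and fixture names made up for illustration: dump the data from the development database into a fixture file, then reference it from the test case so Django loads it into the test database:

    # Create the fixture from the development database first:
    #   python manage.py dumpdata myapp --indent 2 > myapp/fixtures/myapp_data.json

    from django.test import TestCase
    from myapp.models import Profile  # hypothetical model

    class ProfileViewTests(TestCase):
        # Loaded into the test database before each test method runs.
        fixtures = ["myapp_data.json"]

        def test_fixture_data_is_available(self):
            self.assertTrue(Profile.objects.exists())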
The ultimate purpose of any unit test framework is to test the functionality of back-end logic against data that is not likely to change. As mentioned in the links above, functional testing and regression testing are there to cover the real database.
For more details on fixtures, visit Using Fixtures with Django Test Cases
We have a Django app running in production.
It used to run Django 1.4.3 against a MySQL database.
All our migration scripts used Django South.
We recently upgraded to Django 1.11.6 and, along with it, moved the data to a PostgreSQL database.
The app runs fine, but the migration scripts generated by Django migrations from our models are not fully consistent with the existing schema.
Almost all of the differences seem to be in the generated index names.
What are our options for making the Django migrations and the database consistent? How do we go forward from here?
Should we generate a new empty database from the Django migrations and migrate the data from the old database to the new one?
I know we can edit models.py and set the index names manually, but that is too cumbersome; we would need to edit hundreds of models. Is there an easier way to do that?
Is there a way I can generate the migration scripts from the existing database and verify whether the models are compatible?
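For reference, here is roughly how the drift can be surfaced with standard manage.py commands (the dry run writes no files):

    # Show what Django thinks still needs migrating, without writing files:
    python manage.py makemigrations --dry-run --verbosity 3

    # Introspect the live schema into model definitions for comparison:
    python manage.py inspectdb > live_models.py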
I run migrations on the server this way:
Upload the models.py file to the server with some new field sfield in model Mobject
Run the makemigrations command via manage.py
Run the migrate command via manage.py
But some requests between the end of the first step and the end of the third step fail with django.core.exceptions.FieldDoesNotExist: Mobject has no field named 'sfield'. (Which is obvious, because the field is already on the class, so the Django ORM tries to fetch it from the DB, where it doesn't exist yet.)
Is it possible to make all three steps "atomic"? Or to globally ignore these exceptions, because for now I don't need sfield; I only want to perform the migration without exceptions. Or maybe I can temporarily mark the new field in some way that prevents Django from fetching it from the DB while keeping it visible to makemigrations/migrate?
If you do a select * from yourtable, Django tries to fetch all fields defined on the model.
You can use only() in your ORM queries to select specific fields, so no exception is raised while migrating new fields that are not used in the queries yet.
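A minimal sketch of that, using the asker's Mobject model (the import path and the extra field names are assumptions):

    from myapp.models import Mobject  # import path assumed

    # Fetch only columns that already exist in the database, so the
    # ORM never selects the not-yet-migrated sfield column:
    for obj in Mobject.objects.only("id", "name"):
        print(obj.name)  # loaded by only(), safe
        # Accessing obj.sfield here would trigger an extra query for the
        # deferred field and fail until the migration has been applied.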
By the way, you should create the migration files locally, test the new field on your local machine, and then commit the migration files to the server. On the server, while deploying, you then only need to run migrate right after deployment, which shortens the window in which these exceptions can happen (a sketch of that workflow follows the quote below).
From the Django docs:
"The reason that there are separate commands to make and apply migrations is because you'll commit migrations to your version control system and ship them with your app; they not only make your development easier, they're also usable by other developers and in production."
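In that spirit, a sketch of the deploy flow (standard manage.py and git commands; the app name myapp is assumed):

    # Locally: create and test the migration, then commit it.
    python manage.py makemigrations myapp
    python manage.py test
    git add myapp/migrations/
    git commit -m "Add Mobject.sfield"

    # On the server, right after deploying the code and migration files:
    python manage.py migrate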
EDIT: The issue described below was caused not by a bad workflow but by an apparent bug when loading fixtures. One of my apps had the fixture initial_data.json, and the testing framework loads the fixture before performing the necessary migrations. (FWIW, I'm using Django 1.7 + Python 3.4.) This issue is described here. (My workaround: rename the fixture to data.json.)
I'll leave the rest of the post intact in case this helps someone else in the future.
I'm trying to use Django's built-in tests to rapidly test my Django models during development. Unfortunately, when I try this, I get the error:
psycopg2.ProgrammingError: relation "app_relation" does not exist
The workflow I was imagining was:
Define a few model fields (possibly across apps)
Test logic using Django tests
Fix logic errors, modify fields, and iterate this process.
This way, I can build my models incrementally without creating a large number of migrations. Migrations create headaches for me because I frequently add, remove, and rename fields or models as I'm validating my logic.
For example, my model has demographic fields, and I'm not sure whether I should keep a single male_under_18 field or split it up into male_under_5, male_5_to_9, male_10_to_15, and male_16_to_18 granularity.
It sure would be nice to verify the decision using tests.py before making my migrations.
My understanding was that Django's manage.py test myapp creates a database independent of the development database, and so doesn't require that my development database match the current schema defined by my models.
If the workflow above is impossible (or downright silly), I'm open to other ways of solving my problem.
Related question: django unit tests without a db. (Doesn't work because I want to test the DB!)
I have set up a database router to direct different apps and different models to different databases using the db_for_read and db_for_write router methods.
That works very well, except that ./manage.py syncdb doesn't respect those router settings.
When I syncdb my models, all of them are created in the default database.
The database router only provides an allow_syncdb method, but no sync_to method. Is there a way to tell the syncdb command where to create the new tables?
Note: I can't use the --database option alone, as some of an app's models sometimes go to a different database than the rest of the app.
When you write your router, make sure you've written the allow_syncdb() method. It takes both a database and a model. When you run manage.py syncdb you're essentially setting --database=default. If you don't want your models synced to the default database, your allow_syncdb() method should return False for the condition db == 'default' and model._meta.app_label == 'myapp'.
You'll then need to run syncdb with the --database=your_other_db option to get myapp into that DB. But make sure in that case that allow_syncdb() returns True only when db == 'your_other_db' and model._meta.app_label == 'myapp'.
Does that make sense? Basically, you have to run the manage.py syncdb command twice, once for each database; you cannot run it once and have it update both databases. A sketch of such a router is below.
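A minimal router sketch along those lines, assuming the second database is aliased other_db in DATABASES and the routed app is labeled myapp (both names are illustrative):

    class MyAppRouter(object):
        """Send myapp's models to other_db; everything else stays in default."""

        def db_for_read(self, model, **hints):
            if model._meta.app_label == "myapp":
                return "other_db"
            return None  # no opinion: fall through to the default database

        def db_for_write(self, model, **hints):
            if model._meta.app_label == "myapp":
                return "other_db"
            return None

        def allow_syncdb(self, db, model):
            # syncdb runs once per --database value; create myapp's tables
            # only in other_db, and every other table only in default.
            if model._meta.app_label == "myapp":
                return db == "other_db"
            return db == "default"

With that in place, run ./manage.py syncdb for the default database and ./manage.py syncdb --database=other_db for the other one.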