I use CIText from django.contrib.postgres; how can I carry on using SQLite for my unit tests?
At the moment Django dies trying to run my tests with:
django.db.utils.ProgrammingError: type "citext" does not exist
LINE 1: ...gmodel" ALTER COLUMN "name" TYPE citext USING "name"::citext
If you use database-specific features, your application becomes dependent on that particular database.
If the dependencies on the app using citext are minimal, you could create a separate settings file for testing that does not include this app.
That obviously limits your test coverage and is only suitable for quick local testing. On a CI system, you definitely want to use the database you use in production anyway.
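A minimal sketch of such a settings file, assuming your main settings live in settings.py (the app name myapp_citext is hypothetical):

    # settings_test.py -- reuse the main settings, but drop the
    # citext-dependent app and run on in-memory SQLite.
    from settings import *

    INSTALLED_APPS = [app for app in INSTALLED_APPS if app != 'myapp_citext']
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }

You would then run the tests with python manage.py test --settings=settings_test.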
I have a Django application that reads different CSV files and saves them to the same model/table in the DB.
While fixtures are certainly used for setting up a test environment quickly, I used the fixture to configure the different CSV schemata that are subsequently parsed by the Django application.
So each data provider has its own distinct schema, which is a different row in the CsvSchema table.
During code review it came up that this is bad style because it leads to duplication of data. I found it useful to pass such configurations via a fixture and treated it like a configuration file.
To further treat the fixture like a configuration file, I even put it inside the git repository, which is again something the reviewer agrees with.
The reviewer also claimed that fixtures should be used only once in the lifetime of the application, while setting it up initially.
To me, the fixtures are just a tool that Django provides us. I can play around with the schema details on my development machine, and then dump them into a fixture, effectively using it as configuration. Am I playing too fast and loose with the rules here?
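For what it's worth, the dump-and-load workflow I mean looks like this (the app label myapp and the fixture path are illustrative):

    python manage.py dumpdata myapp.CsvSchema --indent 2 > myapp/fixtures/csv_schemata.json
    python manage.py loaddata csv_schemata.json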
I'm trying to write unit tests for my Node application. I'm using a PostgreSQL database for development and SQLite for the tests. However, SQLite does not understand some PostgreSQL features such as to_tsvector, and sometimes I get a "database is locked" error from SQLite. So I am thinking of running a Postgres server for testing the application both locally and on the build server. Is that a good solution? I found some alternatives that mention testing with a Docker container.
So what is a suitable way to run tests against Postgres locally and on the build server without running into database locks?
I would avoid using the database in the unit tests, as they are then dependent on an external system:
Part of being a unit test is the implication that things outside the code under test are mocked or stubbed out. Unit tests shouldn't have dependencies on outside systems. They test internal consistency as opposed to proving that they play nicely with some outside system.
Basically, mock any calls to the database so you don't need to use one.
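As a minimal sketch of the idea in Python's unittest.mock (the same pattern applies with Node mocking libraries; the service and repository here are hypothetical):

    import unittest
    from unittest import mock

    # Hypothetical service that normally talks to Postgres via a repository.
    class UserService:
        def __init__(self, repo):
            self.repo = repo

        def display_name(self, user_id):
            row = self.repo.fetch_user(user_id)
            return row["name"].title()

    class UserServiceTest(unittest.TestCase):
        def test_display_name_without_a_database(self):
            # The repository is mocked, so no database is touched.
            repo = mock.Mock()
            repo.fetch_user.return_value = {"name": "ada lovelace"}
            self.assertEqual(UserService(repo).display_name(1), "Ada Lovelace")
            repo.fetch_user.assert_called_once_with(1)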
However, if you really must use a Postgres database, you should use the official image in a Compose file and initialize it with your schema. You can then connect to it from your tests in a known state.
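If it helps, here is a sketch of the same idea using the testcontainers Python package (assuming it and SQLAlchemy are installed; testcontainers also has a Node port, and a plain Compose file works just as well):

    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    # Spin up a throwaway Postgres in Docker, test against it, tear it down.
    with PostgresContainer("postgres:15") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.connect() as conn:
            # Real Postgres, so features like to_tsvector work in tests.
            result = conn.execute(sqlalchemy.text("SELECT to_tsvector('hello world')"))
            print(result.scalar())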
As for the database lock, it may disappear once you use Postgres instead of SQLite; you may also want to check whether your tests run concurrently.
I want to speed up my tests by running only the migrations necessary to build the database. Currently, Django runs all of them.
I know how to specify dependencies among migrations and actively use that. I keep an eye on dependencies, and most of my tests don't even depend on Django. I want some dividends for that.
Is there any way to specify the particular apps or migrations on which a test depends?
I found the available_apps property of TransactionTestCase, which is marked as private API, but it doesn't work for me, even if I run a single test class or test method.
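For reference, this is how I understand available_apps is meant to be used (it is private API, so behaviour may vary between Django versions; the app label is hypothetical):

    from django.test import TransactionTestCase

    class ImportTest(TransactionTestCase):
        # Private API: Django behaves as if only these apps were installed
        # while this test case runs, which cuts down per-test database work.
        available_apps = ['myapp']

        def test_something(self):
            ...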
I have a Django project that uses a Postgres DB with a number of UDFs. The UDFs are written in plpythonu. Since plpythonu is an untrusted language, only database superusers can use it to create UDFs. This poses a serious problem in that I have no way of programmatically creating them within the test suite. Right now I see two options:
Modify django.db.backends.creation._create_test_db to create the test db from a template, which has my UDFs already loaded. This seems hacky and laborious to maintain.
Create a superuser with MD5 authentication and load the UDFs in the test suite with psycopg2. This seems insecure.
Are there less terrible ways I can do this?
Thanks.
I don't know the PG UDF model, only the MySQL and SQLite ones. A few other work-arounds might be:
Create a PG instance which you use just for testing, isolated so that potential security problems won't be a concern. Databases are cheap, after all. (A settings sketch for this follows the list.)
If the UDFs are simple (or the test data size makes them simple) then recreate them in SQLite and use that database for testing. This assumes that you don't need to test the plpython functionality as part of the project.
If the UDF functionality is much more stable than the rest of the project, then split the UDF code into its own subproject, and for the rest of the code assume the UDFs will be present. At the very least this makes it so that tests for most of the code can be automated, and only the UDF code needs manual intervention to specify the password.
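For the first workaround, the test settings could simply point at that dedicated instance; a rough sketch (host, names, and password are placeholders, and the exact TEST options spelling depends on your Django version):

    # settings_test.py -- point tests at an isolated Postgres instance
    # that already has the plpythonu UDFs loaded by a superuser.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'HOST': 'pg-test.internal',   # throwaway instance, not production
            'NAME': 'myproject',
            'USER': 'myproject_test',
            'PASSWORD': 'not-a-real-secret',
            'TEST': {'NAME': 'test_myproject'},
        }
    }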
I went with #1. It's not ideal but it works ok.
I want to define a model only for use in my test suite. It would be nice not to create its table in production. Is there any variable I can test against to check whether I'm in test mode?
If you're running your tests using the Django testing framework (python manage.py test), then it will automatically create all of the tables for your models in a completely different database, and then populate those tables from your application fixtures, prior to running your tests. Once the tests have completed, the database will be dropped. (If your production database is named foo, the test database will be named test_foo, unless you specify differently.)
If you have models that you need only for tests, then all you have to do is to place your test models in the same directory structure as your test code, instead of intermingled with your production models. That will ensure that they are not inadvertently mixed into your production database.
If you use a recent version of Django (I can confirm versions 1.4 through 1.6) and use django.test, you can put all your test model definitions in tests/__init__.py. This way, you will have the test models in unit tests without them polluting the production database.
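A minimal sketch of that layout (the app and model names are hypothetical):

    # myapp/tests/__init__.py
    from django.db import models

    class SampleTestModel(models.Model):
        # Imported only when the test runner loads the tests package, so the
        # table is created in the test database and never in production.
        name = models.CharField(max_length=100)

        class Meta:
            app_label = 'myapp'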