Django test to use Postgres extension without migration

I have an existing project to which I want to start adding tests. There have been quite a few data migrations over its history and I don't want to spend the effort making them run in the test setup, so I have disabled migrations:
DATABASES['default']['TEST'] = {
    'MIGRATE': False,
}
However, this Postgres database makes use of some extensions:
class Meta:
    verbose_name = 'PriceBook Cache'
    verbose_name_plural = 'PriceBook Caches'
    indexes = [
        GinIndex(
            name="pricebookcache_gin_trgm_idx",
            fields=['itemid', 'itemdescription', 'manufacturerpartnumber', 'vendor_id'],
            opclasses=['gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops', 'gin_trgm_ops']
        ),
    ]
which results in an error when I run the tests:
psycopg2.errors.UndefinedObject: operator class "gin_trgm_ops" does not exist for access method "gin"
I have had a look at https://docs.djangoproject.com/en/4.1/ref/databases/#migration-operation-for-adding-extensions, which specifically says this is done via a migration, but I have disabled migrations.
An alternative is using a template database, but I don't really want to, as the tests will run automatically in GitLab inside a Docker container and I don't want to maintain another fixture outside the project repo.
So is there a way to initialise the database without running the migrations, or is it possible to run a completely different set of migrations just for the tests?
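For illustration, a rough sketch of one possible direction (assuming only the pg_trgm extension is missing and the default database alias is used): a connection_created signal handler that creates the extension, so it already exists when Django builds the migration-less test schema.

# e.g. in an AppConfig.ready() or any module imported at startup -- illustrative only
from django.db.backends.signals import connection_created
from django.dispatch import receiver


@receiver(connection_created)
def create_pg_extensions(sender, connection, **kwargs):
    # Ensure pg_trgm exists before Django creates tables/indexes
    # on the test database that is built without migrations.
    if connection.vendor == 'postgresql':
        with connection.cursor() as cursor:
            cursor.execute('CREATE EXTENSION IF NOT EXISTS pg_trgm;')

This requires the test database user to be allowed to create extensions; CREATE EXTENSION IF NOT EXISTS makes the handler safe to run on every connection.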

Related

How to manage a stop or restart of a task from Django-based site?

I'll be running a script on a server which will automatically create model instances in a database. The idea is to use an infinite loop (e.g. while True:) which will endlessly create instances until I somehow stop it.
I want to use Django to nicely check from my website how big my database is, and from there I want to stop or restart it.
What could be a good approach here?
I was thinking about Celery, but I'm not clear on how I would stop it, and it looks like overkill. Any suggestions?
A simple solution is to have a model that stores the name of the script and whether it should keep running:
from django.db import models

class ScriptTracker(models.Model):
    name = models.CharField(max_length=100)  # max_length is required by CharField; 100 is arbitrary
    keep_running = models.BooleanField(default=True)
Then your script would just check the db every loop to see if it should stop:
def my_script():
    while True:
        if not ScriptTracker.objects.get(name="my_script").keep_running:
            # stop running
            return
        # create an instance in the db
        MyObject.objects.create(name="helloworld")
Create the ScriptTracker object
ScriptTracker.objects.create(name="my_script", keep_running=True)
Start your script running; this is simple to do if the script is built as a management command:
python manage.py my_script
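For completeness, a rough sketch of what that management command wrapper could look like (the app name myapp and the module path are assumptions, not from the answer):

# myapp/management/commands/my_script.py -- illustrative sketch only
from django.core.management.base import BaseCommand

from myapp.scripts import my_script  # wherever the loop function above lives (assumed path)


class Command(BaseCommand):
    help = "Create instances in a loop until ScriptTracker says to stop"

    def handle(self, *args, **options):
        my_script()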

Why does django create a migration file when we change the storage attribute of FileField, when the storage type is not stored in the database?

I don't want to create a migration file whenever I change the storage of the FileField. I am getting the storage class from settings.py and it is configurable.
settings.py
from storages.backends.s3boto import S3BotoStorage  # provided by django-storages

Storage = S3BotoStorage(bucket='example')
models.py
from django.conf import settings
from django.db import models

class myModel(models.Model):
    file = models.FileField(upload_to='', blank=True, storage=settings.Storage)
TL;DR: It's effectively an empty migration and it's harmless; let it be. Reading any further or trying different things is probably just a waste of time.
Whenever you make a change to a model, Django has to make a migration, because it needs to keep track of what changes have been made to a model over time. However, that does not always mean that a modification will be made in the database. The migration produced here is an empty one. Your migration probably looks something like this, and you will say, "hey, that's not empty!!"
from django.db import migrations, models

import stackoverflow.models


class Migration(migrations.Migration):

    dependencies = [
        ('stackoverflow', '0010_jsonmodel'),
    ]

    operations = [
        migrations.AlterField(
            model_name='jsonmodel',
            name='jfield',
            field=stackoverflow.models.MyJsonField(),
        ),
        migrations.AlterField(
            model_name='parent',
            name='picture',
            field=models.ImageField(storage=b'Bada', upload_to=b'/home/'),
        ),
    ]
But it is!! Just run:
./manage.py sqlmigrate <myapp> <migration_number>
and you will find that it does not produce any SQL! Quoting the manual, as suggested by @sayse:
Django will make migrations for any change to your models or fields -
even options that don’t affect the database - as the only way it can
reconstruct a field correctly is to have all the changes in the
history, and you might need those options in some data migrations
later on (for example, if you’ve set custom validators).
Here is my interpretation of your question: you want to use the storage class (which can be changed in the future) specified in your settings.py to store the files.
Suppose you specify the xyz storage class in your settings.py and run makemigrations. Django will create a migration file with the storage attribute set to the one you specified in settings.py.
Now if you change the storage class in settings.py and upload a file without running makemigrations, the file will still be uploaded to the new storage you specified in the settings file.
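For illustration, a rough sketch of that behaviour (the app name myapp and the file contents here are made up):

from django.core.files.base import ContentFile

from myapp.models import myModel  # assumed app name

obj = myModel()
# The file is written via the storage currently configured in settings,
# not the storage that happens to be recorded in the latest migration.
obj.file.save('hello.txt', ContentFile(b'hello world'), save=True)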
Hope it helps.

Rails 4: run migrations as separate DB user

The situation I have is that our normal Rails DB user has full ownership in order to run migrations.
However, we use a shared DB for development, so we can't run "destructive" DB tasks against the development DB, such as rake db:drop/reset/etc....
My thought is to create 2 DB users:
rails-service
rails-migrator
The service user is the "normal" web app user that connects to the DB when the app is live. This DB user would only have standard CRUD privileges but no dropping rights.
The migrator user is the "admin" user that is only used for running migrations. This DB user would have normal "full" access to the DB such that it "could" drop the DB if that command were executed.
Question: Is there a clean way to tell Rails migrations to run as the rails-migrator user? I'm not sure how I would accomplish this aside from somehow altering the connection strings for every rails migration file, which seems like a bad idea.
In tandem with the above, I'm going to "delete" the destructive rake tasks so that a developer can't even run them.
# lib/tasks/db.rake
# See: https://coderwall.com/p/jt4e1q/disable-destructive-rake-tasks-by-environment
tasks = Rake.application.instance_variable_get '@tasks'
tasks.delete 'db:reset'
tasks.delete 'db:drop'

namespace :db do
  desc 'db:reset not available in this environment'
  task :reset do
    puts 'db:reset has been disabled'
  end

  desc 'db:drop not available in this environment'
  task :drop do
    puts 'db:drop has been disabled'
  end
end
I refer you to the answer of Matthew Rudy Jacobs from 2007 (!): https://www.ruby-forum.com/topic/123618
Luckily it still works now :)
I just changed DEFINED? and the rest to ENV['AS_DB_ADMIN'] and used it to separate out migration access to another user.
On migration I used
set :default_env, { as_db_admin: true }

Running tests with unmanaged tables in django

My Django app works with tables that are not managed, and my model has the following defined:
class Meta:
    managed = False
    db_table = 'mytable'
When I run a simple test that imports the person, I get the following:
(person)bob#sh ~/person/dapi $ > python manage.py test
Creating test database for alias 'default'...
DatabaseError: (1060, "Duplicate column name 'db_Om_no'")
The tests.py is pretty simple like so:
import person.management.commands.dorecall
from person.models import Person
from django.test import TestCase
import pdb

class EmailSendTests(TestCase):
    def test_send_email(self):
        person = Person.objects.all()[0]
        Command.send_email()
I did read in the Django docs where it says "For tests involving models with managed=False, it’s up to you to ensure the correct tables are created as part of the test setup.". So I understand that my problem is that I did not create the appropriate tables. So am I supposed to create a copy of the tables in the test_person db that the test framework created?
Every time I run the tests, the test_person db gets destroyed (I think) and set up again, so how am I supposed to create a copy of the tables in test_person? Am I thinking about this right?
Update:
I saw this question on SO and added the ManagedModelTestRunner() in utils.py. Though ManagedModelTestRunner() does get run (confirmed by inserting pdb.set_trace()), I still get the Duplicate column name error. I do not get errors when I do python manage.py syncdb (though this may not mean much, since the tables are already created - I will try removing the table and rerunning syncdb to see if I can get any clues).
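For reference, the ManagedModelTestRunner follows the usual pattern from that linked question; a rough sketch of the pattern (not my exact utils.py), with TEST_RUNNER in settings pointed at it:

# utils.py -- sketch of the usual ManagedModelTestRunner pattern
from django.apps import apps
from django.test.runner import DiscoverRunner


class ManagedModelTestRunner(DiscoverRunner):
    """Treat all unmanaged models as managed for the duration of the test
    run, so their tables are created in the test database."""

    def setup_test_environment(self, *args, **kwargs):
        self.unmanaged_models = [m for m in apps.get_models() if not m._meta.managed]
        for m in self.unmanaged_models:
            m._meta.managed = True
        super().setup_test_environment(*args, **kwargs)

    def teardown_test_environment(self, *args, **kwargs):
        super().teardown_test_environment(*args, **kwargs)
        for m in self.unmanaged_models:
            m._meta.managed = False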
I had the same issue, where I had an unmanaged legacy database that also had a custom database name set in the models meta property.
Running tests with a managed model test runner, as you linked to, solved half my problem, but I still had the problem of Django not knowing about the custom_db name:
django.db.utils.ProgrammingError: relation "custom_db" does not exist
The issue was that ./manage.py makemigrations still creates definitions of all models, managed or not, and includes your custom db names in the definition, which seems to blow up tests. By installing:
pip install django-test-without-migrations==0.2
and running tests like this:
./manage.py test --nomigrations
I was able to write tests against my unmanaged model without getting any errors.

How do I run a unit test against the production database?

How do I run a unit test against the production database instead of the test database?
I have a bug that's seems to occur on my production server but not on my development computer.
I don't care if the database gets trashed.
Is it feasible to make a copy of the database, or of the part of the database that causes the problem? If you keep a backup server, you might be able to copy the data from there instead (make sure you have another backup, in case you mess up the backup database).
Basically, you don't want to mess with live data and you don't want to be left with no backup in case you mess something up (and you will!).
Use manage.py dumpdata > mydata.json to get a copy of the data from your database.
Go to your local machine, copy mydata.json to a subdirectory of your app called fixtures e.g. myapp/fixtures/mydata.json and do:
manage.py syncdb # Set up an empty database
manage.py loaddata mydata.json
Your local database will be populated with data and you can test away.
Make a copy of the database... it's really good practice!!
Just execute the test, but instead of calling commit at the end, call rollback.
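A rough sketch of that rollback idea in Django terms (the helper names here are made up for illustration):

from django.db import transaction


class _Rollback(Exception):
    """Raised only to abort the surrounding atomic block."""


def run_test_with_rollback(test_body):
    # Execute the test body inside a transaction, then force a rollback
    # so the production data is left exactly as it was.
    try:
        with transaction.atomic():
            test_body()
            raise _Rollback
    except _Rollback:
        pass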
The first thing to try should be manually executing the test code on the shell, on the production server.
python manage.py shell
If that doesn't work, you may need to dump the production data, copy it locally and use it as a fixture for the testcase you are using.
If there is a way to ask Django to use the standard database without creating a new one, then rather than creating a fixture, I think you can do an SQL dump, which will generally be a much smaller file.
Short answer: you don't.
Long answer: you don't, you make a copy of the production database and run it there
If you really don't care about trashing the db, then Marco's answer of rolling back the transaction is my preferred choice as well. You could also try NdbUnit but I personally don't think the extra baggage it brings is worth the gains.
How do you test the test db now? By test db do you mean SQLite?
HTH,
Berryl
I have both a full-on-slow-django-test-db suite and a crazy-fast-runs-against-production test suite built from a common test module. I use the production suite for sanity checking my changes during development and as a commit validation step on my development machine. The django suite module looks like this:
import django.test
import my_test_module
...

class MyTests(django.test.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)
The production test suite module uses bare unittest and looks like this:
import unittest
import my_test_module

class MyTests(unittest.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
unittest.TextTestRunner(verbosity=2).run(suite)
The test module looks like this:
def XXX(testcase):
    testcase.assertEquals('foo', 'bar')
I run the bare unittest version like this, so my tests in either case have the django ORM available to them:
% python manage.py shell < run_unit_tests
where run_unit_tests consists of:
import path.to.production_module
The production module needs a slightly different setUp() and tearDown() from the django version, and you can put any required table cleaning in there. I also use the django test client in the common test module by mimicking the test client class:
class FakeDict(dict):
    """
    class that wraps dict and provides a getlist member
    used by the django view request unpacking code, used when
    passing in a FakeRequest (see below), only needed for those
    api entrypoints that have list parameters
    """
    def getlist(self, name):
        return [x for x in self.get(name)]

class FakeRequest(object):
    """
    an object mimicking the django request object passed in to views
    so we can test the api entrypoints from the developer unit test
    framework
    """
    user = get_test_user()
    GET = {}
    POST = {}
Here's an example of a test module function that tests via the client:
def XXX(testcase):
    if getattr(testcase, 'client', None) is None:
        req_dict = FakeDict()
    else:
        req_dict = {}
    req_dict['param'] = 'value'
    if getattr(testcase, 'client', None) is None:
        fake_req = FakeRequest()
        fake_req.POST = req_dict
        resp = view_function_to_test(fake_req)
    else:
        resp = testcase.client.post('/path/to/function_to_test/', req_dict)
    ...
I've found this structure works really well, and the super-speedy production version of the suite is a major time-saver.
If your database supports template databases, use the production database as a template database. Ensure that your Django database user has sufficient permissions.
If you are using PostgreSQL, you can easily do this by specifying the name of your production database as POSTGIS_TEMPLATE (and using the PostGIS backend).
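On recent Django versions with PostgreSQL, the same idea can also be expressed with the TEST['TEMPLATE'] database setting; a minimal sketch, where my_production_db is a placeholder for your production database name:

# settings.py
DATABASES['default']['TEST'] = {
    # Create the test database as a copy of the named template database.
    'TEMPLATE': 'my_production_db',
}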