Django syncdb exception after updating to 1.4 - django

I was developing an app in Django and needed a feature from version 1.4, so I decided to upgrade.
But then a strange error appeared when I ran syncdb.
I am using the new manage.py and, as you can see, it creates some of the tables but then fails:
./manage.py syncdb
Creating tables ...
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table django_content_type
Creating table django_session
Creating table django_site
Traceback (most recent call last):
File "./manage.py", line 9, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 371, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/commands/syncdb.py", line 91, in handle_noargs
sql, references = connection.creation.sql_create_model(model, self.style, seen_models)
File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/db/backends/creation.py", line 44, in sql_create_model
col_type = f.db_type(connection=self.connection)
TypeError: db_type() got an unexpected keyword argument 'connection'

I had the same issue: the definition of my custom field was missing the connection parameter.
from django.db import models

class BigIntegerField(models.IntegerField):
    def db_type(self, connection):
        return "bigint"

This question is old, answered and accepted, but I am adding my understanding because I hit the error without using a customized type, and it came from Django Evolution (evolve --hint --execute) rather than from syncdb. It may be helpful for someone in the future.
I am average in Python and new to Django. I also encountered the same issue when I added some new features to my existing project. To add the new features I had to add some new fields of the models.CharField() type, as follows.
included_domains = models.CharField(
    "set of comma(,) separated list of domains in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
excluded_domains = models.CharField(
    "set of comma(,) separated list of domains NOT in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
The Django version I am using is 1.3.1:
$ python -c "import django; print django.get_version()"
1.3.1 <--------# version
$ python manage.py syncdb
Project signature has changed - an evolution is required
Django Evolution: Django Evolution is an extension to Django that allows you to track changes in your models over time, and to update the database to reflect those changes.
$ python manage.py evolve --hint
#----- Evolution for messagingframework
from django_evolution.mutations import AddField
from django.db import models
MUTATIONS = [
    AddField('MessageConfiguration', 'excluded_domains', models.CharField, initial=u'', max_length=300),
    AddField('MessageConfiguration', 'included_domains', models.CharField, initial=u'', max_length=300)
]
#----------------------
Trial evolution successful.
Run './manage.py evolve --hint --execute' to apply evolution.
The trial was a success, so I tried to apply the changes to the DB:
$ python manage.py evolve --hint --execute
Traceback (most recent call last):
File "manage.py", line 25, in <module>
execute_manager(settings)
File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 362, in execute_manager
utility.execute()
File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 303, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 195, in run_from_argv
self.execute(*args, **options.__dict__)
File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 222, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 60, in handle
self.evolve(*app_labels, **options)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 140, in evolve
database))
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 426, in mutate
return self.add_column(app_label, proj_sig, database)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 438, in add_column
sql_statements = evolver.add_column(model, field, self.initial)
File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/db/common.py", line 142, in add_column
f.db_type(connection=self.connection), # <=== here f is field class object
TypeError: db_type() got an unexpected keyword argument 'connection'
To understand this exception, I checked and found that it is similar to:
>>> def f(a):
... print a
...
>>> f('b', b='a')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() got an unexpected keyword argument 'b'
>>>
So the function signature has been changed.
I had not added any new customized or enum fields, only two fields of a kind that was already in the model, and char-type fields are supported by most databases (I am using PostgreSQL), yet I was still getting this error!
Then I read this reply from Russell Keith-Magee:
What you've hit here is the end of the deprecation cycle for code that
doesn't support multiple databases.
In Django 1.2, we introduced multiple database support; in order to
support this, the prototype for get_db_prep_lookup() and
get_db_prep_value() was changed.
For backwards compatibility, we added a shim that would transparently
'fix' these methods if they hadn't already been fixed by the
developer.
In Django 1.2, the usage of these shims raised a
PendingDeprecationWarning. In Django 1.3, they raised a
DeprecationWarning.
Under Django 1.4, the shim code has been removed -- so any code that
wasn't updated will now raise errors like the one you describe.
But I was not getting any DeprecationWarning, presumably because of the newer version of Django Evolution.
Still, from the quote above I could understand that, to support multiple databases, the function signature was changed and an extra connection argument is needed. I also checked the db_type() signature in my installation of Django as follows:
/django$ grep --exclude-dir=".svn" -n 'def db_type(' * -R
contrib/localflavor/us/models.py:8: def db_type(self):
contrib/localflavor/us/models.py:24: def db_type(self):
:
:
I also referred to the Django documentation:
Field.db_type(self, connection):
Returns the database column data type for the Field, taking into account the connection
object, and the settings associated with it.
Then I understood that to resolve this issue I had to inherit from the models.Field class and override the db_type() method. Because I am using PostgreSQL, where a 300-character column of this kind is declared as char(300), I need to return 'char(300)'. In my models.py I added:
class CharMaxlengthN(models.Field):
    def db_type(self, connection):
        return 'char(%d)' % self.max_length  # because I am using PostgreSQL
If you encounter a similar problem, please check your underlying DB's manual for which column type you need to create, and return it as a string.
I then changed the definition of the new fields (the ones I needed to add); note the comments:
included_domains = CharMaxlengthN(  # <-- Notice change
    "set of comma(,) separated list of domains in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
excluded_domains = CharMaxlengthN(  # <-- Notice change
    "set of comma(,) separated list of domains NOT in target emails",
    default="",
    max_length=it_len.EMAIL_LEN*5)
Then I executed the same command that was failing previously:
$ python manage.py evolve --hint --execute
You have requested a database evolution. This will alter tables
and data currently in the None database, and may result in
IRREVERSABLE DATA LOSS. Evolutions should be *thoroughly* reviewed
prior to execution.
Are you sure you want to execute the evolutions?
Type 'yes' to continue, or 'no' to cancel: yes
Evolution successful.
I also checked my DB and tested my newly added features. It is now working perfectly, with no DB problems.
If you want to create an ENUM field, read Specifying a mySQL ENUM in a Django model.
Edit: I realized that instead of subclassing models.Field I should have inherited from the more specific subclass, models.CharField.
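A minimal sketch of that CharField-based version (behaviour assumed to be the same, only the base class changes):

class CharMaxlengthN(models.CharField):
    def db_type(self, connection):
        # PostgreSQL column type; other backends may need a different string
        return 'char(%d)' % self.max_length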
Similarly, I needed to create decimal DB fields, so I added the following class to my models:
class DecimalField(models.DecimalField):
    def db_type(self, connection):
        d = {
            'max_digits': self.max_digits,
            'decimal_places': self.decimal_places,
        }
        return 'numeric(%(max_digits)s, %(decimal_places)s)' % d

Related

Error loading existing db data into Django (fixtures, postgresql)

I am trying to load some generated data into Django without disrupting the existing data in the site. What I have:
Saved the data as a valid JSON (validated here).
The JSON format matches the Django documentation. In previous attempts I also aligned it to the Django documentation here (slightly different field order, the result was the same).
The output errors I'm receiving are very generic and not helpful, even with verbosity=3.
The Error Prompt
Operations to perform:
Apply all migrations: workoutprogrammes
Running migrations:
Applying workoutprogrammes.0005_auto_20220415_2021...Loading '/Users/Robert/Desktop/Projects/Powerlifts/src/workoutprogrammes/fixtures/datafile_2' fixtures...
Checking '/Users/Robert/Desktop/Projects/Powerlifts/src/workoutprogrammes/fixtures' for fixtures...
Installing json fixture 'datafile_2' from '/Users/Robert/Desktop/Projects/Powerlifts/src/workoutprogrammes/fixtures'.
Traceback (most recent call last):
File "/Users/Robert/Desktop/Projects/Powerlifts/venv/lib/python3.8/site-packages/django/core/serializers/json.py", line 70, in Deserializer
yield from PythonDeserializer(objects, **options)
File "/Users/Robert/Desktop/Projects/Powerlifts/venv/lib/python3.8/site-packages/django/core/serializers/python.py", line 93, in Deserializer
Model = _get_model(d["model"])
KeyError: 'model'
The above exception was the direct cause of the following exception:... text continues on...
for obj in objects:
File "/Users/Robert/Desktop/Projects/Powerlifts/venv/lib/python3.8/site-packages/django/core/serializers/json.py", line 74, in Deserializer
raise DeserializationError() from exc
django.core.serializers.base.DeserializationError: Problem installing fixture '/Users/Robert/Desktop/Projects/Powerlifts/src/workoutprogrammes/fixtures/datafile_2.json':
auto_2022_migration.py file:
from django.db import migrations
from django.core.management import call_command

def db_migration(apps, schema_editor):
    call_command('loaddata', '/filename.json', verbosity=3)

class Migration(migrations.Migration):
    dependencies = [('workoutprogrammes', '0004_extable_delete_ex_table'),]
    operations = [migrations.RunPython(db_migration),]
JSON file extract (start... end)
NB: all my PKs are UUIDs generated from postgresql
[{"pk":"af82d5f4-2814-4d52-b2b1-6add6cf18d3c","model":"workoutprogrammes.ex_table","fields":{"exercise_name":"Cable Alternating Front Raise","utility":"Auxiliary","mechanics":"Isolated","force":"Push","levator_scapulae":"Stabilisers",..."obliques":"Stabilisers","psoas_major":""}}]
I'm not sure what the original error was. I suspect it had to do with either editing the table/schema using psql separately from Django, OR using the somewhat outdated uuid-ossp package instead of the pgcrypto package (native to Django via CryptoExtension()). I was never able to confirm. I did however manage to get it working by:
Deleting the table (schema and data) using psql and the app using Django, then creating a new app in Django.
Loading the previous model code into new_app/models.py, and making a new migration file, first updating the CryptoExtension() operation before committing the migration.
Creating a fixtures directory and the relevant file (per the Django spec, with the JSON validated online).
Validating my UUIDs online.
Loading the fixture data into the app: python3 manage.py loaddata /path/to/data/file_name.json (see the migration sketch below for an alternative).
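If you still prefer to load the data from a migration, as the original auto_2022_migration.py attempted, a sketch that passes the fixture name rather than an absolute path is below; it assumes the file lives at workoutprogrammes/fixtures/datafile_2.json, and the names are only illustrative:

from django.core.management import call_command
from django.db import migrations

def load_fixture(apps, schema_editor):
    # 'datafile_2' is resolved against the app's fixtures/ directory
    call_command('loaddata', 'datafile_2', app_label='workoutprogrammes', verbosity=3)

class Migration(migrations.Migration):
    dependencies = [('workoutprogrammes', '0004_extable_delete_ex_table')]
    operations = [migrations.RunPython(load_fixture)]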

Django runserver failing during admin site checks

When running the runserver command, we are getting the error below, which breaks the runserver command:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 117, in inner_run
self.check(display_num_errors=True)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 390, in check
include_deployment_checks=include_deployment_checks,
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 377, in _run_checks
return checks.run_checks(**kwargs)
File "/usr/local/lib/python3.7/site-packages/django/core/checks/registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "/usr/local/lib/python3.7/site-packages/django/contrib/admin/checks.py", line 55, in check_admin_app
for site in all_sites:
File "/usr/local/lib/python3.7/_weakrefset.py", line 60, in __iter__
for itemref in self.data:
RuntimeError: Set changed size during iteration
This issue started to happen out of nowhere, since none of the code related to AdminSites has been modified in months. It also doesn't happen every time, but it is increasing in frequency.
It doesn't seem to be related to the current implementation, since it's a django's runserver issue during the system checks, but here are the Implementation details:
class PaymentsAdminSite(AdminSite):
    site_header = "Payments Admin Site"

site = PaymentsAdminSite(name="payments-admin")

@admin.register(models.Payment, site=site)
class PaymentsAdmin(admin.ModelAdmin):
    def cancel(self, request, queryset):
        pass

    actions = (cancel,)

    def get_queryset(self, request):
        return models.Payment.objects.all()

# urls.py
url(r"^payments-admin/", site.urls)
Why is the error occurring:
When django runs it loads all of the apps and runs various checks. The default admin interface has various checks. The one that is failing is django.contrib.admin.checks.check_admin_app. It loops through all_sites and performs a check on each of these sites (AdminSites). The reason this error is being raised is that this set all_sites is being edited whilst it is being looped through (obviously a big no no).
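For reference, the RuntimeError itself is just Python's standard complaint about mutating a set while iterating over it, for example:

>>> s = {1, 2, 3}
>>> for x in s:
...     s.add(x + 10)
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Set changed size during iteration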
How is it being edited? Well... all_sites refers to a WeakSet in admin.sites, and every time you instantiate an AdminSite, it adds itself to this WeakSet:
# django.contrib.admin.sites
all_sites = WeakSet()

class AdminSite:
    def __init__(self, name='admin'):
        ...
        all_sites.add(self)
Why is this happening?
I'm not 100% sure what is causing the inconsistency but I suspect the module is being loaded in a thread separate to the thread running the checks, and depending on which thread runs faster, the error occurs sometimes and not others.
How can I fix it?
Again, without a better idea of what is going on with the threads and which modules are getting imported when, I can't be 100% sure that this will fix the above, but I'll explain why I think it should work below. Put the following in the apps.py of the relevant app:
class PaymentsAdminSite(AdminSite):
    site_header = "Payments Admin Site"

site = PaymentsAdminSite(name="payments-admin")
Why should it fix it?
One of the first things Django does is register all of the apps in your INSTALLED_APPS, which involves importing all of the apps.py modules. After it has registered all of the apps, it registers all of the models, and then it calls the ready method on each of the AppConfigs. It is the ready method in django.contrib.admin that adds the check above, so by instantiating your AdminSite in an apps.py file, it should be instantiated before the check is even added, let alone run.
Note that you should only add the AdminSite to your apps.py; all of the registering of models to it should stay in your admin.py, since those models won't even be registered (with Django) yet at the point that apps.py is run. A sketch of the admin.py side follows below.
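A rough sketch of what that admin.py might then look like (the payments module path and the import of site from apps.py are assumptions about the project layout):

# payments/admin.py
from django.contrib import admin
from . import models
from .apps import site   # the AdminSite instantiated in apps.py

@admin.register(models.Payment, site=site)
class PaymentsAdmin(admin.ModelAdmin):
    def cancel(self, request, queryset):
        pass

    actions = (cancel,)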

Flask Migrate "ValueError: Constraint must have a name"

I have created a Flask-Migrate script; however, when I run the upgrade function, I get the following error:
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 6378428b838a, empty message
Traceback (most recent call last):
File "migrate.py", line 22, in <module>
manager.run()
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/flask_script/__init__.py", line 417, in run
result = self.handle(argv[0], argv[1:])
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/flask_script/__init__.py", line 386, in handle
res = handle(*args, **config)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/flask_migrate/__init__.py", line 95, in wrapped
f(*args, **kwargs)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/flask_migrate/__init__.py", line 280, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/command.py", line 298, in upgrade
script.run_env()
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/util/compat.py", line 173, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "migrations/env.py", line 96, in <module>
run_migrations_online()
File "migrations/env.py", line 90, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/runtime/migration.py", line 518, in run_migrations
step.migration_fn(**kw)
File "/Users/slatifi/git/StaffTrainingLog/migrations/versions/6378428b838a_.py", line 23, in upgrade
batch_op.create_foreign_key(None, 'organisation', ['organisation'], ['id'])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/contextlib.py", line 119, in __exit__
next(self.gen)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/operations/base.py", line 325, in batch_alter_table
impl.flush()
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/operations/batch.py", line 106, in flush
fn(*arg, **kw)
File "/Users/slatifi/git/StaffTrainingLog/venv/lib/python3.7/site-packages/alembic/operations/batch.py", line 390, in add_constraint
raise ValueError("Constraint must have a name")
ValueError: Constraint must have a name
I have seen other people with the same error and they simply added render_as_batch to the env.py file. I did this but I still get the same error. Any thoughts?
Note: This is the modification I made in the env.py file:
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        process_revision_directives=process_revision_directives,
        **current_app.extensions['migrate'].configure_args,
        render_as_batch=True
    )
This is the upgrade script created by the migration
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '2838e3e96536'
down_revision = None
branch_labels = None
depends_on = None

def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('user', schema=None) as batch_op:
        batch_op.add_column(sa.Column('organisation', sa.String(length=5), nullable=False))
        batch_op.create_foreign_key(None, 'organisation', ['organisation'], ['id'])
    # ### end Alembic commands ###

def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table('user', schema=None) as batch_op:
        batch_op.drop_constraint(None, type_='foreignkey')
        batch_op.drop_column('organisation')
    # ### end Alembic commands ###
This is expected because SQLite3 has only limited support for ALTER TABLE.
You can pass render_as_batch=True when instantiating Flask-Migrate, like this:
migrate = Migrate(app, db, render_as_batch=True)
There is no need to modify Flask-Migrate's env.py file.
In addition, to avoid the problem entirely in your future migrations, you can add constraint naming templates for all types of constraints to the SQLAlchemy metadata, and then I think you will get consistent names.
See how to do this in the Flask-SQLAlchemy documentation. I'm copying the code example from the docs below for your convenience:
from sqlalchemy import MetaData
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

convention = {
    "ix": 'ix_%(column_0_label)s',
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(constraint_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}

metadata = MetaData(naming_convention=convention)
db = SQLAlchemy(app, metadata=metadata)
The other answers show how to configure Flask to use named constraints in the future, but that doesn't solve the problem of dropping existing unnamed constraints in Alembic migrations. To handle any existing unnamed constraints, you need to also define the naming convention in the migration script, as explained in the Alembic docs.
For example, suppose you're trying to do a migration that adds ON DELETE CASCADE to an existing foreign key in your test table. If you've followed one of the other answers (e.g. https://stackoverflow.com/a/62651160/470844) and added the naming_convention to the Flask init script, then flask db upgrade will generate something like this:
def upgrade():
    with op.batch_alter_table('test', schema=None) as batch_op:
        batch_op.drop_constraint(None, type_='foreignkey')
        batch_op.create_foreign_key(batch_op.f('fk_test_user_id_user'), 'user', ['user_id'], ['id'], ondelete='CASCADE')
Note that while the create_foreign_key call uses a constraint name (i.e. fk_test_user_id_user), the drop_constraint call still uses None as the constraint name, which will cause the ValueError: Constraint must have a name error from this question's title.
To fix that, you need to edit the migration to use the naming_convention, and replace the None with the generated constraint name. For example, you'd change the above upgrade to:
naming_convention = {
    "ix": 'ix_%(column_0_label)s',
    "uq": "uq_%(table_name)s_%(column_0_name)s",
    "ck": "ck_%(table_name)s_%(column_0_name)s",
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    "pk": "pk_%(table_name)s"
}

def upgrade():
    with op.batch_alter_table('test', schema=None, naming_convention=naming_convention) as batch_op:
        batch_op.drop_constraint('fk_test_user_id_user', type_='foreignkey')
        batch_op.create_foreign_key(batch_op.f('fk_test_user_id_user'), 'user', ['user_id'], ['id'], ondelete='CASCADE')
(In this example, you'd also need to disable SQLite foreign key support while running the migration script, e.g. using the technique here, to avoid the batch migration itself triggering a cascading delete. The problem is documented here.)
Installing Flask-Migrate also helps avoid code references to declarative_base or modifications to the env.py file.
For apps with an app factory, split the metadata assignment and the db initialization between the extensions.py and __init__.py files.
extensions.py:
from sqlalchemy import MetaData
from flask_sqlalchemy import SQLAlchemy

metadata = MetaData(
    naming_convention={
        "ix": 'ix_%(column_0_label)s',
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s"
    }
)

db = SQLAlchemy(metadata=metadata)
__init__.py:
from flask import Flask, Blueprint
from config import ...
from flask_migrate import Migrate
from extensions import db
...

def create_app():
    app = Flask(__name__,
                template_folder='../app/templates')
    app.config.from_object(config[...])
    config[...].init_app(app)
    db.init_app(app)
    with app.app_context():
        db.create_all()
    migrate = Migrate(app, db)
    ...
    from .templates.main import main
    app.register_blueprint(main)
    return app

SQLAlchemy: Specifying session to use for model

I am using Flask-SQLAlchemy and I need to create a session without auto-flushing for one operation. However, the default scoped session created by Flask-SQLAlchemy, which is accessed via db.session, has auto-flushing turned on.
I'm doing bulk updates for 100k rows, and the auto-flushing is causing severe performance issues.
I tried creating a new session with auto-flushing turned off using:
from flask_sqlalchemy import SignallingSession
db_session = SignallingSession(db, autoflush=False)
And then I tried creating a new row using:
from flask_sqlalchemy import SQLAlchemy, BaseQuery

db = SQLAlchemy(app)

class Tool(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(1024), nullable=True)

add_tool = Tool()
add_tool.title = title
db_session.add(add_tool)
However, doing this throws the following exception:
Traceback (most recent call last):
File "app", line 182, in <module>
called for last item
import_tools('x.json', 1)
File "app", line 173, in import_tools
x.add_to_database()
File "app", line 126, in add_to_database
title=metadata['title'], commit=True)
File "app", line 150, in transfer
self.db_session.add(import_tool)
File ".virtualenvs/project/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1492, in add
self._save_or_update_state(state)
File ".virtualenvs/project/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1504, in _save_or_update_state
self._save_or_update_impl(state)
File ".virtualenvs/project/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1759, in _save_or_update_impl
self._save_impl(state)
File ".virtualenvs/project/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1731, in _save_impl
self._attach(state)
File ".virtualenvs/project/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1849, in _attach
state.session_id, self.hash_key))
sqlalchemy.exc.InvalidRequestError: Object '<Tool at 0x11361f4d0>' is already attached to session '1' (this is '2')
What I believe is happening is that when the object is created, it is attached to the global scoped session that is provided by Flask-SQLAlchemy, while I want it to be attached to the session that I created. I tried looking through the code and documentation for Flask-SQLAlchemy and SQLAlchemy, but I couldn't find either the source of the problem or the solution to it.
Is there some way I can specify which session to use when the object is created using the model class? Or is there something totally different that I am missing which would be causing this problem?
SQLAlchemy sessions provide a no_autoflush context manager. This will suspend any flushes until after you exit the block.
model1 = Model(name='spam')
db.session.add(model1)  # This will flush

with db.session.no_autoflush:
    model2 = Model()
    db.session.add(model2)  # This will not
    model2.name = 'eggs'

db.session.commit()
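Applied to the bulk import from the question, a sketch (the loop and variable names are made up) could look like this:

with db.session.no_autoflush:
    for title in titles:          # e.g. the 100k rows being imported
        tool = Tool()
        tool.title = title
        db.session.add(tool)
db.session.commit()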

How do I get syncdb db_table and app_label to play nicely together

I've got a model that looks something like this:
class HeiselFoo(models.Model):
    title = models.CharField(max_length=250)

    class Meta:
        """ Meta """
        app_label = "Foos"
        db_table = u"medley_heiselfoo_heiselfoo"
And whenever I run my test suite, I get an error because Django isn't creating the tables for that model.
It appears to be an interaction between app_label and db_table -- as the test suite runs normally if db_table is set, but app_label isn't.
Here's a link to the full source code: http://github.com/cmheisel/heiselfoo
Here's the traceback from the test suite:
E
======================================================================
ERROR: test_truth (heiselfoo.tests.HeiselFooTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/heiselfoo/tests.py", line 10, in test_truth
f.save()
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 434, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/base.py", line 527, in save_base
result = manager._insert(values, return_id=update_pk, using=using)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/manager.py", line 195, in _insert
return insert_query(self.model, values, **kwargs)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/query.py", line 1479, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 783, in execute_sql
cursor = super(SQLInsertCompiler, self).execute_sql(None)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 727, in execute_sql
cursor.execute(sql, params)
File "/Users/chris/Code/heiselfoo/ve/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 200, in execute
return Database.Cursor.execute(self, query, params)
DatabaseError: no such table: medley_heiselfoo_heiselfoo
----------------------------------------------------------------------
Ran 1 test in 0.004s
FAILED (errors=1)
Creating test database 'default'...
No fixtures found.
medley_heiselfoo_heiselfoo
Destroying test database 'default'...
Ok, I'm a moron. You need to have an app in INSTALLED_APPS that matches your app_label or your tables won't get created.
In my case it doesn't work: when I change the app name in INSTALLED_APPS to an app_label which matches the model's Meta setting but is different from the app folder name, manage.py syncdb raises "Error: No module named ...". I still have no idea how to sync models with the database when using app_label and db_table in a model's Meta.
Caveat: this is not a direct answer to your question.
From looking at your code on GitHub, I see that you have defined the model inside models.py, and yet you have defined app_label inside your model's Meta. Is there a reason for doing this?
Django's documentation says that app_label is meant to be used
If a model exists outside of the standard models.py (for instance, if the app’s models are in submodules of myapp.models), the model must define which app it is part of:
Given that, I don't see any reason to define an app_label in your case. Hence my question. Perhaps you are better off without defining the app_label after all.
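In other words, a sketch of the same model keeping only the custom table name (no app_label) should have its table created normally under the heiselfoo app:

class HeiselFoo(models.Model):
    title = models.CharField(max_length=250)

    class Meta:
        db_table = u"medley_heiselfoo_heiselfoo"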