Django Celery worker __init__() error

The tech stack looks like this:
Django frontend and backend
Celery worker queue for asynchronously processing time-consuming tasks
Within the past day or two I've noticed a lot of these kinds of stack traces both locally and in production:
[2012-07-05 20:31:01,583: CRITICAL/MainProcess] Task site_endpoint.tasks.async_inbound_message[a950736c-ff93-420c-9fbf-6deb2b88ff4d] INTERNAL ERROR: TypeError('__init__() takes at least 3 arguments (1 given)',)
Traceback (most recent call last):
File "/projects/site/venv/lib/python2.7/site-packages/celery/execute/trace.py", line 192, in trace_task
R = I.handle_error_state(task, eager=eager)
File "/projects/site/venv/lib/python2.7/site-packages/celery/execute/trace.py", line 91, in handle_error_state
}[self.state](task, store_errors=store_errors)
File "/projects/site/venv/lib/python2.7/site-packages/celery/execute/trace.py", line 114, in handle_failure
task.backend.mark_as_failure(req.id, exc, self.strtb)
File "/projects/site/venv/lib/python2.7/site-packages/celery/backends/base.py", line 96, in mark_as_failure
traceback=traceback)
File "/projects/site/venv/lib/python2.7/site-packages/celery/backends/base.py", line 229, in store_result
return self._store_result(task_id, result, status, traceback, **kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/djcelery/backends/database.py", line 26, in _store_result
traceback=traceback)
File "/projects/site/venv/lib/python2.7/site-packages/djcelery/managers.py", line 40, in _inner
return fun(*args, **kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/djcelery/managers.py", line 164, in store_result
"traceback": traceback})
File "/projects/site/venv/lib/python2.7/site-packages/djcelery/managers.py", line 82, in update_or_create
return self.get_query_set().update_or_create(**kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/djcelery/managers.py", line 66, in update_or_create
obj, created = self.get_or_create(**kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/query.py", line 385, in get_or_create
obj.save(force_insert=True, using=self.db)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/base.py", line 460, in save
self.save_base(using=using, force_insert=force_insert, force_update=force_update)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/base.py", line 543, in save_base
for f in meta.local_fields if not isinstance(f, AutoField)]
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/fields/subclassing.py", line 28, in inner
return func(*args, **kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/fields/subclassing.py", line 28, in inner
return func(*args, **kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 276, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepared=False)
File "/projects/site/venv/lib/python2.7/site-packages/django/db/models/fields/subclassing.py", line 53, in inner
return func(*args, **kwargs)
File "/projects/site/venv/lib/python2.7/site-packages/picklefield/fields.py", line 154, in get_db_prep_value
value = force_unicode(dbsafe_encode(value, self.compress, self.protocol))
File "/projects/site/venv/lib/python2.7/site-packages/picklefield/fields.py", line 57, in dbsafe_encode
value = b64encode(dumps(deepcopy(value), pickle_protocol))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 328, in _reconstruct
args = deepcopy(args, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 163, in deepcopy
y = copier(x, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 237, in _deepcopy_tuple
y.append(deepcopy(a, memo))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy.py", line 329, in _reconstruct
y = callable(*args)
TypeError: __init__() takes at least 3 arguments (1 given)
Looking at the database calls when this happens, I can see the following queries being executed:
LOG: statement: SELECT "celery_taskmeta"."id", "celery_taskmeta"."task_id", "celery_taskmeta"."status", "celery_taskmeta"."result", "celery_taskmeta"."date_done", "celery_taskmeta"."traceback" FROM "celery_taskmeta" WHERE "celery_taskmeta"."task_id" = 'a950736c-ff93-420c-9fbf-6deb2b88ff4d'
LOG: statement: SAVEPOINT s140735259576672_x4
LOG: statement: ROLLBACK
LOG: statement: BEGIN
LOG: statement: SELECT "celery_taskmeta"."id", "celery_taskmeta"."task_id", "celery_taskmeta"."status", "celery_taskmeta"."result", "celery_taskmeta"."date_done", "celery_taskmeta"."traceback" FROM "celery_taskmeta" WHERE "celery_taskmeta"."task_id" = 'a950736c-ff93-420c-9fbf-6deb2b88ff4d'
LOG: statement: SAVEPOINT s140735259576672_x5
LOG: statement: ROLLBACK
LOG: statement: BEGIN
LOG: statement: SELECT "celery_taskmeta"."id", "celery_taskmeta"."task_id", "celery_taskmeta"."status", "celery_taskmeta"."result", "celery_taskmeta"."date_done", "celery_taskmeta"."traceback" FROM "celery_taskmeta" WHERE "celery_taskmeta"."task_id" = 'a950736c-ff93-420c-9fbf-6deb2b88ff4d'
LOG: statement: SAVEPOINT s140735259576672_x6
I am having a tough time understanding what the source of this invalid object initialization is. Anyone have ideas?

It looks like you are using New Relic for remote performance monitoring/forensic analysis. I have seen this kind of problem before when old versions of their library are in use. I recommend checking that you are running the latest version of their client library.

This happened to me, and I solved it by reading "Python multiprocessing pool hangs at join?" and http://bugs.python.org/issue9400.
The problem:
Celery pickles the exceptions it gets and sends them somewhere (to the database/broker, I guess). So if, at any time, a Celery task raises an exception that isn't picklable, this bug will happen.
The solution:
You must ensure you're handling all the unusual exceptions your Celery task code might raise. The traceback gives you hints, but it may not point at the exact culprit.
If you really don't know where your exceptions are coming from, try putting your task code inside try...except: pass blocks to narrow it down.
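To make the failure mode concrete, here is a minimal, hypothetical example (not from the question's code) of an exception that breaks the copy/pickle round-trip shown in the traceback: its __init__ requires extra arguments but resets args via the base constructor, so reconstruction calls the class with no arguments:
import copy

class BadError(Exception):
    def __init__(self, code, message):
        # Calling the base constructor without arguments leaves
        # self.args == (), so copy/pickle rebuilds the exception
        # by calling BadError() with no arguments at all.
        super(BadError, self).__init__()
        self.code = code
        self.message = message

copy.deepcopy(BadError(500, "boom"))
# TypeError: __init__() takes exactly 3 arguments (1 given)

# The fix is to forward the arguments so they survive the round-trip:
#     super(BadError, self).__init__(code, message)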

MWAA Trigger DAG Issue with POST request

I have a problem when I try to execute multiple tasks within MWAA using POST requests. I am on the mw1.small tier of MWAA and schedule around 3 tasks per minute with EventBridge and Lambda. When I check my results I find that some tasks are missing, and when I search the logs I notice the task was triggered but never scheduled or queued, and it does not appear on the tree or graph view.
I have 169 rules created on EventBridge, each running at a set time every day, and I only see around 165 to 166 executions of the DAG. It is not a problem with EventBridge or Lambda: I checked the logs for those services and all 169 DAG invocations are working fine.
The Lambda function mentioned above triggers the DAG using a POST request for every rule I have on EventBridge.
These are the configuration options I have set:
celery.pool=1
celery.worker_autoscale=1,1
core.dag_file_processor_timeout=150
core.dagbag_import_timeout=90
core.killed_task_cleanup_time=604800
core.min_serialized_dag_update_interval=60
scheduler.dag_dir_list_interval=300
scheduler.min_file_process_interval=300
scheduler.parsing_processes=1
scheduler.processor_poll_interval=60
scheduler.schedule_after_task_execution=false
NOTE: I know I can use Step Functions but this is not an option in my case.
EDIT:
This problem is caused by multiple parallel requests made from the Lambda function. Airflow 2.2.2 uses (dag_id, execution_date) as a unique key on the dag_run table, and the trigger endpoint truncates execution_date to whole seconds by default (the replace_microseconds parameter visible in the traceback), so two requests landing in the same second try to insert the same key.
The two types of traceback I found are:
/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py:91 DeprecationWarning: Calling `DAG.create_dagrun()` without an explicit data interval is deprecated
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(test_dag, 2023-02-16 20:19:55+00) already exists.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 138, in dag_trigger
dag_id=args.dag_id, run_id=args.run_id, conf=args.conf, execution_date=args.exec_date
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/client/local_client.py", line 30, in trigger_dag
dag_id=dag_id, run_id=run_id, conf=conf, execution_date=execution_date
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 125, in trigger_dag
replace_microseconds=replace_microseconds,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 91, in _trigger_dag
dag_hash=dag_bag.dags_hash.get(dag_id),
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/models/dag.py", line 2359, in create_dagrun
session.flush()
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2540, in flush
self._flush(objects)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2682, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 2642, in _flush
flush_context.execute()
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/persistence.py", line 1136, in _emit_insert_statements
statement, params
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1130, in _execute_clauseelement
distilled_params,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1511, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1277, in _execute_context
cursor, statement, parameters, context
File "/usr/local/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dag_run_dag_id_execution_date_key"
DETAIL: Key (dag_id, execution_date)=(test_dag, 2023-02-16 20:19:55+00) already exists.
[SQL: INSERT INTO dag_run (dag_id, queued_at, execution_date, start_date, end_date, state, run_id, creating_job_id, external_trigger, run_type, conf, data_interval_start, data_interval_end, last_scheduling_decision, dag_hash) VALUES (%(dag_id)s, %(queued_at)s, %(execution_date)s, %(start_date)s, %(end_date)s, %(state)s, %(run_id)s, %(creating_job_id)s, %(external_trigger)s, %(run_type)s, %(conf)s, %(data_interval_start)s, %(data_interval_end)s, %(last_scheduling_decision)s, %(dag_hash)s) RETURNING dag_run.id]
[parameters: {'dag_id': 'test_dag', 'queued_at': datetime.datetime(2023, 2, 16, 20, 19, 56, 168249, tzinfo=Timezone('UTC')), 'execution_date': DateTime(2023, 2, 16, 20, 19, 55, tzinfo=Timezone('UTC')), 'start_date': None, 'end_date': None, 'state': <TaskInstanceState.QUEUED: 'queued'>, 'run_id': 'test22__2023-02-16T20:19:03+602430', 'creating_job_id': None, 'external_trigger': True, 'run_type': <DagRunType.MANUAL: 'manual'>, 'conf': <psycopg2.extensions.Binary object at 0x7fe5917cc900>, 'data_interval_start': DateTime(2023, 2, 16, 20, 19, 55, tzinfo=Timezone('UTC')), 'data_interval_end': DateTime(2023, 2, 16, 20, 19, 55, tzinfo=Timezone('UTC')), 'last_scheduling_decision': None, 'dag_hash': 'a1c4fce80be1afad038a0ccd8a41efcf'}]
(Background on this error at: http://sqlalche.me/e/13/gkpj)
and
Traceback (most recent call last):
File "/usr/local/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/dag_command.py", line 138, in dag_trigger
dag_id=args.dag_id, run_id=args.run_id, conf=args.conf, execution_date=args.exec_date
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/client/local_client.py", line 30, in trigger_dag
dag_id=dag_id, run_id=run_id, conf=conf, execution_date=execution_date
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 125, in trigger_dag
replace_microseconds=replace_microseconds,
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/api/common/experimental/trigger_dag.py", line 75, in _trigger_dag
f"A Dag Run already exists for dag id {dag_id} at {execution_date} with run id {run_id}"
airflow.exceptions.DagRunAlreadyExists: A Dag Run already exists for dag id test_dag at 2023-02-16 20:20:23+00:00 with run id test21__2023-02-16T20:20:04+061773
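One way to avoid the collision (a hedged sketch, not from the original post; the host and any auth handling are placeholders) is to keep microsecond precision when triggering, so parallel POSTs landing in the same second get distinct execution_dates. The experimental REST endpoint accepts a replace_microseconds flag in the request body:
import json
import urllib3

AIRFLOW_HOST = "https://my-mwaa-webserver"  # placeholder
DAG_ID = "test_dag"

def trigger_dag():
    # replace_microseconds defaults to true, which truncates execution_date
    # to whole seconds and causes the duplicate-key collisions shown above
    http = urllib3.PoolManager()
    resp = http.request(
        "POST",
        f"{AIRFLOW_HOST}/api/experimental/dags/{DAG_ID}/dag_runs",
        body=json.dumps({"conf": {}, "replace_microseconds": "false"}),
        headers={"Content-Type": "application/json"},
    )
    return resp.status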

test_database() does not use the database given

I'm trying to test my models using peewee's test_database, which is supposed to route any SQL executed inside the context block to the database passed in. However, when running the test I noticed that the production database is always used, which should not be the case.
Here's the exception I get:
======================================================================
ERROR: test_Admin (__main__.DatabaseTestSuite)
----------------------------------------------------------------------
Traceback (most recent call last):
File "db/db_test.py", line 14, in test_Admin
with test_database(test_db, (Admin), create_tables=True):
File "/usr/local/lib/python2.7/dist-packages/playhouse/test_utils.py", line 21, in __enter__
for m in self.models:
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 4723, in __iter__
return iter(self.select())
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 3149, in __iter__
return iter(self.execute())
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 3142, in execute
self._qr = ResultWrapper(model_class, self._execute(), query_meta)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2826, in _execute
return self.database.execute_sql(sql, params, self.require_commit)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 3683, in execute_sql
self.commit()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 3507, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 3676, in execute_sql
cursor.execute(sql, params or ())
File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 166, in execute
result = self._query(query)
File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 322, in _query
conn.query(q)
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 835, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1019, in _read_query_result
result.read()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1302, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 981, in _read_packet
packet.check_error()
File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 393, in check_error
err.raise_mysql_exception(self._data)
File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
raise errorclass(errno, errval)
ProgrammingError: (1146, u"Table 'db.admin' doesn't exist")
----------------------------------------------------------------------
Here's the test code:
from db import *
import unittest
from playhouse.test_utils import test_database

test_db = MySQLDatabase('testdb',
                        user='testuser',
                        password='testpass')

class DatabaseTestSuite(unittest.TestCase):
    def test_Admin(self):
        with test_database(test_db, (Admin), create_tables=True):
            Admin.create(username="testuser",
                         email="testuser@email.com")
            result = Admin.select().where(Admin.username == "testuser")
            self.assertIsNotNone(result)

if __name__ == '__main__':
    unittest.main()
I've opened an issue on the GitHub page, but have received no help so far; it should provide extra details if needed. Any help would be appreciated, thanks.
The model tuple has a single item and thus requires a trailing comma. Changing this
with test_database(test_db, (Admin), create_tables=True):
to this
with test_database(test_db, (Admin,), create_tables=True):
resolved it.
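The underlying gotcha: in Python, parentheses alone don't make a tuple; a one-element tuple needs the trailing comma. (Admin) is just the Admin class, and because iterating a peewee model class iterates its rows (the peewee.py __iter__ frame in the traceback), test_database ran a SELECT against the database Admin was still bound to instead of looping over a sequence of models:
(Admin)      # just Admin in redundant parentheses; iterating it queries rows
(Admin,)     # a one-element tuple of models, which is what test_database expects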

Celery SchedulingError: an integer is required

I'm using Celery on Heroku with Redis as my broker. I've tried RabbitMQ as a broker as well, but keep getting the following error when trying to run a scheduled task:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 203, in maybe_due
result = self.apply_async(entry, publisher=publisher)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 259, in apply_async
entry, exc=exc)), sys.exc_info()[2])
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 251, in apply_async
**entry.options)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/task.py", line 555, in apply_async
**dict(self._get_exec_options(), **options)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 347, in send_task
with self.producer_or_acquire(producer) as P:
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 402, in producer_or_acquire
producer, self.amqp.producer_pool.acquire, block=True,
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/amqp.py", line 492, in producer_pool
self.app.pool,
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 608, in pool
self._pool = self.connection().Pool(limit=limit)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 612, in Pool
return ConnectionPool(self, limit, preload)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 987, in __init__
preload=preload)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 833, in __init__
self.setup()
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 1011, in setup
for i in range(self.limit):
SchedulingError: Couldn't apply scheduled task my_task: an integer is required
This is how my task is written:
@app.task(ignore_result=True)
def my_task():
    do_something()
Any ideas what's going on?
It just occurred to me what was going on. In my settings file, I had the following line:
BROKER_POOL_LIMIT = os.environ.get('BROKER_POOL_LIMIT', 1)
I should have cast that to an integer, since environment variables always come through as strings:
BROKER_POOL_LIMIT = int(os.environ.get('BROKER_POOL_LIMIT', 1))
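For anyone else who hits this: os.environ values are always strings, and the pool limit eventually reaches range() inside kombu's setup() (the last frame in the traceback above), which rejects a string. A minimal sketch of the failure mode, with an illustrative value:
import os

os.environ['BROKER_POOL_LIMIT'] = '10'           # env vars are always strings
limit = os.environ.get('BROKER_POOL_LIMIT', 1)   # '10' (str), not 10 (int)
for i in range(limit):                           # TypeError: range() rejects a str
    pass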

SQLite3 backend DB for Django - weird error

So for a project I am querying a bunch of Google Analytics data and storing it in a SQLite3 database. Right now I am working on a script that retrieves historical data for each day over the course of several months. The script runs without error for around a month's worth of days and then throws this weird error that I couldn't find much information on. Can anyone help me figure out why it's throwing this? Is it some sort of memory error?
Here is the traceback:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 75, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Users\root.BrandonYates-PC\Documents\GitHub\Weblife-Repo\bipol_django_site\ga\management\commands\backlog.py", line 10, in <module>
class backLog():
File "C:\Users\root.BrandonYates-PC\Documents\GitHub\Weblife-Repo\bipol_django_site\ga\management\commands\backlog.py", line 42, in backLog
p = Populate(str(date.fromordinal(i)), str(date.fromordinal(i)))
File "C:\Users\root.BrandonYates-PC\Documents\GitHub\Weblife-Repo\bipol_django_site\ga\management\commands\fillTest.py", line 126, in __init__
PageTrackingData())
File "C:\Users\root.BrandonYates-PC\Documents\GitHub\Weblife-Repo\bipol_django_site\ga\management\commands\fillTest.py", line 56, in fill
element.save()
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 545, in save
force_update=force_update, update_fields=update_fields)
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 573, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "C:\Python27\lib\site-packages\django\db\transaction.py", line 319, in __exit__
connection.rollback()
File "C:\Python27\lib\site-packages\django\db\backends\__init__.py", line 180, in rollback
self._rollback()
File "C:\Python27\lib\site-packages\django\db\backends\__init__.py", line 144, in _rollback
return self.connection.rollback()
File "C:\Python27\lib\site-packages\django\db\utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Python27\lib\site-packages\django\db\backends\__init__.py", line 144, in _rollback
return self.connection.rollback()
django.db.utils.OperationalError: cannot rollback - no transaction is active
Relevant code sections:
for i in range(d1, d2):
    print "Date Range: " + str(date.fromordinal(i))
    p = Populate(str(date.fromordinal(i)), str(date.fromordinal(i)))

# inside Populate
self.fill(ecommerce, EcommerceData())
self.fill(self.query.get_data(qsf.getPageTrackingMetrics(),
                              qsf.getPageTrackingDimensions()),
          PageTrackingData())
self.fill(self.query.get_data(qsf.getTrafficSourceMetrics(),
                              qsf.getTrafficSourceDimensions()),
          TrafficData())
self.fill(self.query.get_data(qsf.getAdwordsMetrics(),
                              qsf.getAdwordsDimensions()), AdwordsData())

def fill(self, data, object):
    """
    Create an object list.
    Iterate through the row data, create and append each object to the list.
    Add the analytics data key to the user data, insert the row by saving.
    """
    rows = []
    for row in data:
        #print row
        rows.append(object.create(row))
    for element in rows:
        element.analytics = self.analytic
        #print element
        element.save()
OK, so it seems to be some sort of memory-related error. If you are receiving this error, you may be holding too much data in RAM at once. In short, you need to make your code more memory-efficient for the data you are handling, or slice the data up into more manageable chunks, as in the sketch below.
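A hedged sketch of what slicing into chunks could look like for the fill() method above; chunk_size and the _save_batch helper are illustrative, not part of the original code:
def fill(self, data, object, chunk_size=500):
    # Save rows in fixed-size batches so the whole result set
    # never has to sit in RAM at once.
    batch = []
    for row in data:
        obj = object.create(row)
        obj.analytics = self.analytic
        batch.append(obj)
        if len(batch) >= chunk_size:
            self._save_batch(batch)
            batch = []   # drop references so memory can be reclaimed
    if batch:
        self._save_batch(batch)

def _save_batch(self, batch):   # illustrative helper
    for element in batch:
        element.save()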

AttributeError: 'int' object has no attribute '_compiler_dispatch'

I am using the Flask-SQLAlchemy extension with Alembic for migrations. When I try to add a new migration file and upgrade the schema to the latest revision, I get the following error:
AttributeError: 'int' object has no attribute '_compiler_dispatch'
The content of the migration file:
revision = 'ec2c2d40eb1'
down_revision = '28dda873b826'

from alembic import op
import sqlalchemy as sa

def upgrade():
    op.alter_column(
        'users',
        'wiki_permission',
        new_column_name='wiki_group',
        nullable=False,
        existing_nullable=False,
        type_=sa.Integer(),
        existing_type=sa.Integer(),
        server_default=1,
        existing_server_default=1  # Line of error - 27
    )

def downgrade():
    op.alter_column(
        'users',
        'wiki_group',
        new_column_name='wiki_permission',
        nullable=False,
        existing_nullable=False,
        type_=sa.Integer(),
        existing_type=sa.Integer(),
        server_default=1,
        existing_server_default=1
    )
Thanks for taking the time to help me.
Edit:
The complete error message:
INFO [alembic.migration] Context impl MySQLImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade 28dda873b826 -> ec2c2d40eb1, users change column wiki_permission to wiki_group
Traceback (most recent call last):
File "/home/kevin/Code/python/flask/terminus/venv/bin/alembic", line 9, in <module>
load_entry_point('alembic==0.6.5', 'console_scripts', 'alembic')()
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/config.py", line 298, in main
CommandLine(prog=prog).main(argv=argv)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/config.py", line 293, in main
self.run_cmd(cfg, options)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/config.py", line 279, in run_cmd
**dict((k, getattr(options, k)) for k in kwarg)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/command.py", line 125, in upgrade
script.run_env()
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/script.py", line 203, in run_env
util.load_python_file(self.dir, 'env.py')
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/util.py", line 212, in load_python_file
module = load_module_py(module_id, path)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/compat.py", line 58, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "alembic/env.py", line 77, in <module>
run_migrations_online()
File "alembic/env.py", line 70, in run_migrations_online
context.run_migrations()
File "<string>", line 7, in run_migrations
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/environment.py", line 688, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/migration.py", line 258, in run_migrations
change(**kw)
File "alembic/versions/ec2c2d40eb1_users_change_column_wiki_permission_to_.py", line 27, in upgrade
existing_server_default=1,
File "<string>", line 7, in alter_column
File "<string>", line 1, in <lambda>
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/util.py", line 329, in go
return fn(*arg, **kw)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/operations.py", line 317, in alter_column
existing_autoincrement=existing_autoincrement
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 44, in alter_column
else existing_autoincrement
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/ddl/impl.py", line 76, in _exec
conn.execute(construct, *multiparams, **params)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 720, in execute
return meth(self, multiparams, params)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 67, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 768, in _execute_ddl
compiled = ddl.compile(dialect=dialect)
File "<string>", line 1, in <lambda>
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 468, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 25, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 197, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 220, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/ext/compiler.py", line 410, in <lambda>
lambda *arg, **kw: existing(*arg, **kw))
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/ext/compiler.py", line 448, in __call__
return fn(element, compiler, **kw)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 171, in _mysql_change_column
autoincrement=element.autoincrement
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 190, in _mysql_colspec
spec += " DEFAULT %s" % _render_value(compiler, server_default)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 179, in _render_value
return compiler.sql_compiler.process(expr)
File "/home/kevin/Code/python/flask/terminus/venv/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 220, in process
return obj._compiler_dispatch(self, **kwargs)
AttributeError: 'int' object has no attribute '_compiler_dispatch'
Alright, I just went through this same issue. I'm not using flask-sqlalchemy, just straight alembic, but it should be identical.
For what it's worth, it worked for me with sa.Integer with no parentheses, so I would recommend that.
The alembic docs say:
When producing MySQL-compatible migration files, it is recommended that the existing_type, existing_server_default, and existing_nullable parameters be present, if not being altered.
Since it seems you are not altering these columns, the docs suggest those parameters should be present. So:
Remove nullable, type_, and server_default; they are not being altered.
Keep existing_nullable, existing_type, and existing_server_default (see the sketch below).
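Putting that together, a hedged sketch of what upgrade() could look like; note existing_server_default is passed as the string '1' rather than the bare int 1, since a plain int is exactly the object with no _compiler_dispatch (sa.text('1') should also work):
def upgrade():
    op.alter_column(
        'users',
        'wiki_permission',
        new_column_name='wiki_group',
        existing_type=sa.Integer,        # no parentheses, per above
        existing_nullable=False,
        existing_server_default='1',     # a string compiles; a bare int does not
    )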