Simulating connection errors with SQLAlchemy - unit-testing

In production, my Flask / SQLAlchemy app randomly throws psycopg2.OperationalError: server closed the connection unexpectedly on RDS Aurora. Until I track down the issue, I would like my unit tests to ensure that I handle this properly, e.g. that my rollback mechanism is effective, etc.
Right now I mock the Session.commit method to throw a fake exception, but this does not (afaik) leave the session in a failed state that actually needs a rollback.
What are some reliable ways to simulate an actual connection failure in my local, Docker Compose-based development environment?

Rollback with psycopg2
import psycopg2

# `credentials` is assumed to be a dict with the connection parameters
conn = psycopg2.connect(
    f"host='{credentials['host']}' port={credentials['port']} "
    f"dbname='{credentials['dbname']}' user='{credentials['user']}' "
    f"password='{credentials['password']}'"
)
print("Connected to database")
print("Setting autocommit false")
conn.autocommit = False
func_name = 'procedure_update_users'
print(f"Executing function {func_name}...")
try:
    cur = conn.cursor()
    query = f"SELECT {func_name}();"
    print(query)
    cur.execute(query)
    row = cur.fetchone()
    conn.commit()
    print(f"Function {func_name} executed.")
except Exception as e:
    print(e)
    conn.rollback()
    raise  # re-raise the original exception instead of wrapping it
To get an OperationalError I did:
try:
    cur.execute("LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE NOWAIT")
except psycopg2.OperationalError as e:
    cur.execute("rollback")
    raise Exception(e)
(A failed connection attempt also raises one, e.g. psycopg2.OperationalError: fe_sendauth: no password supplied.)
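Another option that produces a genuinely failed session rather than a mocked one: kill the session's server backend from a second connection. This is a sketch assuming a throwaway Postgres instance (e.g. your Docker Compose service) and pytest; the engine fixture is hypothetical.

import pytest
import sqlalchemy as sa
from sqlalchemy.orm import Session

def test_rollback_on_dropped_connection(engine):  # `engine` is a hypothetical fixture
    with Session(engine) as session:
        # Find the pid of the backend serving this session's connection...
        pid = session.execute(sa.text("SELECT pg_backend_pid()")).scalar()
        # ...and terminate it from a separate connection.
        with engine.connect() as admin:
            admin.execute(sa.text("SELECT pg_terminate_backend(:pid)"), {"pid": pid})
        # The next statement on the dead connection raises a real OperationalError,
        # leaving the session in the failed state your recovery code must handle.
        with pytest.raises(sa.exc.OperationalError):
            session.execute(sa.text("SELECT 1"))
        session.rollback()  # now assert that your rollback/retry path works

Because the backend really dies, SQLAlchemy invalidates the connection exactly as it would in production. Stopping the database container mid-test (docker compose stop db) also works, but it is slower and harder to scope to a single test.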

Related

Connectivity issues to SAP SQL Anywhere database as a secondary database with Django

I can connect to a SAP SQL Anywhere database with FreeTDS and pyodbc as follows:
# hello_sybase.py
import pyodbc

try:
    con = pyodbc.connect('Driver={FreeTDS};'
                         'Server=10.60.1.6,2638;'
                         'Database=blabla;'
                         'uid=blabla;pwd=blabla')
    cur = con.cursor()
    cur.execute("Select * from Test")
    for row in cur.fetchall():
        print(row)
    cur.close()
    con.close()
except Exception as e:
    print(str(e))
I tried to connect in a Django view as follows:
import pyodbc
from django.shortcuts import render

CONN_STRING = 'Driver={FreeTDS};Server=10.60.1.6,2638;Database=blabla;uid=blabla;pwd=blabla'

def my_view(request):
    with pyodbc.connect(CONN_STRING) as conn:
        cur = conn.cursor()
        cur.execute('SELECT * FROM test')
        rows = list(cur.fetchall())
    return render(request, 'my_template.html', {'rows': rows})
When I run python manage.py runserver and execute the code in the above view, I get this error message:
('08001', '[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
I tried to put TDS_Version=7.4 as was mentioned in the comment there, but it didn't help.
Is it possible that these are threading issues, as is said in that comment?
How can I fix it? The code works without Django, but with python manage.py runserver it doesn't.
To be more precise, I use this code snippet in the view:
if second_form.is_valid():
    try:
        con = pyodbc.connect(CONN_STRING)
        con.setdecoding(pyodbc.SQL_CHAR, encoding='cp1252')
        con.setdecoding(pyodbc.SQL_WCHAR, encoding='cp1252')
        con.setencoding(encoding='cp1252')
        cur = con.cursor()
        cur.execute("Select * from test")
        result2 = list(cur.fetchall())
        print(result2)
        cur.close()
        con.close()
        context['result2'] = result2
        context['form2'] = SecondForm(request.POST)
    except Exception as e:
        print(str(e))
Here is what the documentation says about this error message:
This SQLSTATE is returned for one or more of the following reasons:
Db2 ODBC is not able to establish a connection with the data source.
The connection request is rejected because a connection that was established with embedded SQL already exists.
Is it a problem with my FreeTDS version? How can I safely upgrade it on Ubuntu 18.04?
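For what it's worth, a connection-string sketch worth trying (my assumption, not something confirmed in the thread): SQL Anywhere descends from Sybase and speaks TDS protocol 5.0 rather than the Microsoft 7.x versions, so pinning the version explicitly may behave differently than TDS_Version=7.4 did.

import pyodbc

# Hypothetical variant of the connection string above; TDS_Version and Port
# are FreeTDS ODBC driver keywords.
CONN_STRING = (
    'Driver={FreeTDS};'
    'Server=10.60.1.6;'
    'Port=2638;'
    'TDS_Version=5.0;'
    'Database=blabla;'
    'uid=blabla;pwd=blabla'
)
con = pyodbc.connect(CONN_STRING)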

Psycopg2 & Flask - tying connection to before_request & teardown_appcontext

Cheers guys,
refactoring my Flask app I got stuck at tying the db connection to @app.before_request and closing it at @app.teardown_appcontext. I am using plain Psycopg2 and the app factory pattern.
First I created a function to call within the app factory so I could use @app, as suggested by Miguel Grinberg here:
def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    # ...
    from shop.db import connect_and_close_db
    connect_and_close_db(app)
    # ...
    return app
Then I tried this pattern suggested on http://flask.pocoo.org/docs/1.0/appcontext/#storing-data:
def connect_and_close_db(app):
    @app.before_request
    def get_db_test():
        conn_string = "dbname=testdb user=testuser password=test host=localhost"
        if 'db' not in g:
            g.db = psycopg2.connect(conn_string)
        return g.db

    @app.teardown_appcontext
    def close_connection(exception):
        db = g.pop('db', None)
        if db is not None:
            db.close()
It resulted in:
TypeError: 'psycopg2.extensions.connection' object is not callable
Does anyone have an idea what happened and how to make it work?
Furthermore, I wonder how I would access the connection object to create a cursor, once its creation is tied to before_request?
This solution is probably far from perfect, and it's not really DRY. I'd welcome comments, or other answers that build on this.
To implement this for raw psycopg2, you probably need to take a look at the connection pooler. There's also a good guide on how to implement this outside Flask.
The basic idea is to create your connection pool first. You want this to be established when the flask application initializes (this could be within the python interpreter or via a gunicorn worker, of which there may be several, in which case each worker has its own connection pool). I chose to store the returned pool in the config:
from flask import Flask, g, jsonify
import psycopg2
from psycopg2 import pool

app = Flask(__name__)
app.config['postgreSQL_pool'] = psycopg2.pool.SimpleConnectionPool(
    1, 20,
    user="postgres",
    password="very_secret",
    host="127.0.0.1",
    port="5432",
    database="postgres")
Note the first two arguments to SimpleConnectionPool are the min & max connections. That's the number of connections going to your database server, between 1 & 20 in this case.
Next define a get_db function:
def get_db():
    if 'db' not in g:
        g.db = app.config['postgreSQL_pool'].getconn()
    return g.db
The SimpleConnectionPool.getconn() method used here simply returns a connection from the pool, which we assign to g.db and return. This means that wherever we call get_db() in the code it returns the same connection, or creates one if not present. There's no need for a before_request decorator.
Now define your teardown function:
@app.teardown_appcontext
def close_conn(e):
    db = g.pop('db', None)
    if db is not None:
        app.config['postgreSQL_pool'].putconn(db)
This runs when the application context is destroyed, and uses SimpleConnectionPool.putconn() to put away the connection.
Finally define a route:
@app.route('/')
def index():
    db = get_db()
    cursor = db.cursor()
    cursor.execute("select 1;")
    result = cursor.fetchall()
    print(result)
    cursor.close()
    return jsonify(result)
This code works for me, tested against postgres running in a docker container. A few things which probably should be improved:
This view isn't very DRY. Perhaps you could move some of this into the get_db function so it returns a cursor (see the sketch after this list).
When the python interpreter exits, you should also find a way to close the connections with app.config['postgreSQL_pool'].closeall().
Although tested, some kind of way to monitor the pool would be good, so that you could watch pool/db connections under load and make sure the pooler behaves as expected.
In another land, the sqlalchemy.scoped_session documentation explains more things relating to this, with some theory on how its 'sessions' work in relation to requests. They have implemented it in such a way that you can call Session.query(...) and it will create the session if it doesn't already exist.
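For the first point, a minimal sketch of what that refactor might look like (my own illustration, not part of the original answer): wrap cursor handling in a context manager so the cursor is always closed.

from contextlib import contextmanager

@contextmanager
def get_cursor():
    # Reuses get_db() from above; the cursor is closed even if the body raises.
    db = get_db()
    cur = db.cursor()
    try:
        yield cur
    finally:
        cur.close()

The view body then shrinks to something like: with get_cursor() as cursor: cursor.execute("select 1;").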
EDIT: Here's a gist with your app factory pattern, and sample usage in the comment.
Currently I am using this pattern:
(I'll edit this answer eventually if I come up with a better solution.)
This is the main script, in which we use the database. It uses two functions from config: get_db() to get a connection from the pool, and put_db() to return the connection to the pool:
from threading import Thread
from time import sleep

from config import get_db, put_db

def select():
    db = get_db()
    sleep(1)
    cursor = db.cursor()
    # Print the select result and the connection's address in memory,
    # to see if the second thread gets a connection at another address
    cursor.execute("SELECT 'It works %s'", (id(db),))
    print(cursor.fetchone())
    cursor.close()
    put_db(db)

Thread(target=select).start()
Thread(target=select).start()
print('Main thread')
This is config.py:
import sys
import os

import psycopg2
from psycopg2 import pool
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())

def get_db(key=None):
    return getattr(get_db, 'pool').getconn(key)

def put_db(conn, key=None):
    getattr(get_db, 'pool').putconn(conn, key=key)

# We need to initialize the connection pool in the main thread for everything to work.
# The pool is stored on the get_db function object.
try:
    setattr(get_db, 'pool', psycopg2.pool.ThreadedConnectionPool(1, 20, os.getenv("DB")))
    print('Initialized db')
except psycopg2.OperationalError as e:
    print(e)
    sys.exit(0)
And if you are curious, there is an .env file containing the db connection string in the DB env variable:
DB="dbname=postgres user=postgres password=1234 host=127.0.0.1 port=5433"
(The .env file is loaded in config.py using the dotenv module.)

celery, flask sqlalchemy: DatabaseError: (DatabaseError) SSL error: decryption failed or bad record mac

Hi, I have a setup where I'm using Celery, Flask and SQLAlchemy, and I am intermittently getting this error:
(psycopg2.DatabaseError) SSL error: decryption failed or bad record mac
I followed this post:
Celery + SQLAlchemy : DatabaseError: (DatabaseError) SSL error: decryption failed or bad record mac
and also a few more, and added prerun and postrun handlers:
@task_postrun.connect
def close_session(*args, **kwargs):
    # Flask SQLAlchemy will automatically create new sessions for you from
    # a scoped session factory, given that we are maintaining the same app
    # context, this ensures tasks have a fresh session (e.g. session errors
    # won't propagate across tasks)
    d.session.remove()

@task_prerun.connect
def on_task_init(*args, **kwargs):
    d.engine.dispose()
But I'm still seeing this error. Anyone solved this?
Note that I'm running this on AWS (with two servers accessing the same database). The database itself is hosted on its own server (not RDS). I believe the total number of celery background tasks running is 6 (2+4). The Flask frontend is running using gunicorn.
My related thread:
https://github.com/celery/celery/issues/3238#issuecomment-225975220
Here is my comment along with additional information:
I use Celery, SQLAlchemy and PostgreSQL on AWS and there is no such problem.
The only difference I can think of is that I have the database on RDS.
I think you can try switching to RDS temporarily, just to test if the
issue is still present or not. If it disappeared with RDS, then
you'll need to look into your PostgreSQL settings.
According to the RDS parameters, I have SSL enabled:
ssl = 1, Enables SSL connections.
ssl_ca_file = /rdsdbdata/rds-metadata/ca-cert.pem
ssl_cert_file = /rdsdbdata/rds-metadata/server-cert.pem
ssl_ciphers = false, Sets the list of allowed SSL ciphers.
ssl_key_file = /rdsdbdata/rds-metadata/server-key.pem
ssl_renegotiation_limit = 0, integer, (kB) Set the amount of traffic to send and receive before renegotiating the encryption keys.
As for Celery initialization code, it is roughly this
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm import sessionmaker

import sqldb

engine = sqldb.get_engine()
cached_data = None

def do_the_work():
    global cached_data
    if cached_data is not None:
        return cached_data
    db_session = None
    try:
        db_session = scoped_session(sessionmaker(
            autocommit=False, autoflush=False, bind=engine))
        data = sqldb.get_session().query(
            sqldb.system.MyModel).filter_by(
                my_type=sqldb.system.MyModel.TYPEA).all()
        cached_data = {}
        for row in data:
            ...  # put row into cached_data
    finally:
        if db_session is not None:
            db_session.remove()
    return cached_data
This do_the_work function is then called from the celery task.
The sqldb.get_engine looks like this:
from sqlalchemy import create_engine

import config  # the module holding SQL_DB_URL / SQL_DB_ECHO

_engine = None

def get_engine():
    global _engine
    if _engine:
        return _engine
    _engine = create_engine(config.SQL_DB_URL, echo=config.SQL_DB_ECHO)
    return _engine
Finally, the SQL_DB_URL and SQL_DB_ECHO in the config module are these:
SQL_DB_URL = 'postgresql+psycopg2://%s:%s@%s/%s' % (
    POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_HOST, POSTGRES_DB_NAME)
SQL_DB_ECHO = False
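For what it's worth, one more pattern worth trying here (a sketch using Celery's worker_process_init signal; my suggestion, not something confirmed in the thread): dispose of the engine when each worker process starts, so forked children never share the parent's SSL-wrapped sockets, which is the classic cause of this error.

import sqldb  # the same module as above
from celery.signals import worker_process_init

@worker_process_init.connect
def reset_engine_on_fork(**kwargs):
    # A forked worker inherits the parent's pooled connections; two processes
    # writing to one SSL socket corrupt the record stream, producing
    # "decryption failed or bad record mac". Dropping the inherited pool
    # forces the child to open fresh connections of its own.
    sqldb.get_engine().dispose()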

Enable integrity checking with sqlite in django

In my django project, I use a mysql db for production and sqlite for tests.
The problem is that some of my code relies on model integrity checking. It works well with mysql, but integrity errors are not thrown when the same code is executed in tests.
I know that foreign key checking must be activated in sqlite:
PRAGMA foreign_keys = 1;
However, I don't know where the best place to do this activation is (same question here).
Moreover, the following code won't work :
def test_method(self):
    from django.db import connection
    cursor = connection.cursor()
    cursor.execute('PRAGMA foreign_keys = ON')
    c = cursor.execute('PRAGMA foreign_keys')
    print(c.fetchone())
    # prints: (0,)
Any ideas?
So, I finally found the correct answer. All I had to do was add this code to the __init__.py file of one of my installed apps:
from django.db.backends.signals import connection_created

def activate_foreign_keys(sender, connection, **kwargs):
    """Enable integrity constraint with sqlite."""
    if connection.vendor == 'sqlite':
        cursor = connection.cursor()
        cursor.execute('PRAGMA foreign_keys = ON;')

connection_created.connect(activate_foreign_keys)
You could use django signals, listening to post_syncdb.
from django.db.models.signals import post_syncdb

def set_pragma_on(sender, **kwargs):
    "your code here"

post_syncdb.connect(set_pragma_on)
This ensures that whenever syncdb is run (and syncdb is run when creating the test database), your SQLite database has the pragma set to on. You should check which database you are using in the set_pragma_on method above.
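A sketch of what that body might look like (my own filling-in of the placeholder, mirroring the connection_created answer above):

from django.db import connection
from django.db.models.signals import post_syncdb

def set_pragma_on(sender, **kwargs):
    # Only sqlite needs the pragma; other backends enforce foreign keys natively.
    if connection.vendor == 'sqlite':
        cursor = connection.cursor()
        cursor.execute('PRAGMA foreign_keys = ON;')

post_syncdb.connect(set_pragma_on)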

Choose test database?

I'm trying to run
./manage.py test
But it tells me
Got an error creating the test database: permission denied to create database
Obviously it doesn't have permission to create the database, but I'm on a shared server, so there's not much I can do about that. I can create a new database through the control panel but I don't think there's any way I can let Django do it automatically.
So, can't I create the test database manually and instead tell Django to flush it every time, rather than recreating the whole thing?
I had a similar issue, but I wanted Django to just bypass the creation of a test database for one of my instances (it is not a mirror, though). Following Mark's suggestion, I created a custom test runner, as follows:
from django.test.simple import DjangoTestSuiteRunner

class ByPassableDBDjangoTestSuiteRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        from django.db import connections
        old_names = []
        mirrors = []
        for alias in connections:
            connection = connections[alias]
            # If the database is a test mirror, redirect its connection
            # instead of creating a test database.
            if connection.settings_dict['TEST_MIRROR']:
                mirrors.append((alias, connection))
                mirror_alias = connection.settings_dict['TEST_MIRROR']
                connections._connections[alias] = connections[mirror_alias]
            elif connection.settings_dict.get('BYPASS_CREATION', 'no') == 'no':
                old_names.append((connection, connection.settings_dict['NAME']))
                connection.creation.create_test_db(self.verbosity, autoclobber=not self.interactive)
        return old_names, mirrors
Then I added an extra dict entry, 'BYPASS_CREATION': 'yes', to one of my database entries inside settings.py.
Finally, I configured a new TestRunner with
TEST_RUNNER = 'auth.data.runner.ByPassableDBDjangoTestSuiteRunner'
I would suggest using sqlite3 for testing purposes while keeping on using mysql/postgres/etc for production.
This can be achieved by placing this in your settings file:
import sys

if 'test' in sys.argv:
    DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}
see Running django tests with sqlite
A temporary sqlite database file will be created in your django project home, which you will have write access to. The other advantage is that sqlite3 is much faster for testing. You may however run into problems if you are using any mysql/postgres specific raw sql (which you should try to avoid anyway).
I think a better solution might be to define your own test runner.
I added this to the comments above but it got kind of lost - recent changes to webfaction make this MUCH easier. You can now create new private database instances.
Follow the instructions there, and when creating a new user make sure to grant them permission to create databases: ALTER USER new_username CREATEDB;
You probably also should change the default cron settings so they don't check as frequently whether this database is up and running.
You could use django-nose as your TEST_RUNNER. Once installed, if you pass the following environment variable, it will not delete and re-create the database (create it manually yourself first).
REUSE_DB=1 ./manage.py test
You can also add the following to settings.py so you don't have to write REUSE_DB=1 every time you want to run tests:
os.environ['REUSE_DB'] = "1"
Note: this will also leave all your tables in the databases which means test setup will be a little quicker, but you will have to manually update the tables (or delete and re-create the database yourself) when you change your models.
My variant for reusing the database:
from django.test.simple import DjangoTestSuiteRunner
from django.core.management import call_command

class TestRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        from django.db import connections
        settings = connections['default'].settings_dict
        settings['NAME'] = settings['TEST_NAME']
        settings['USER'] = settings['TEST_USER']
        settings['PASSWORD'] = settings['TEST_PASSWD']
        call_command('syncdb', verbosity=1, interactive=False, load_initial_data=False)

    def teardown_databases(self, old_config, **kwargs):
        from django.db import connection
        cursor = connection.cursor()
        cursor.execute('show tables;')
        parts = ('DROP TABLE IF EXISTS %s;' % table for (table,) in cursor.fetchall())
        sql = 'SET FOREIGN_KEY_CHECKS = 0;\n' + '\n'.join(parts) + '\nSET FOREIGN_KEY_CHECKS = 1;\n'
        connection.cursor().execute(sql)
The following is a django test suite runner that creates a database using the Webfaction XML-RPC API. Note: setting up the database via the API may take up to a minute, and the script may appear to be stuck momentarily; just wait a little while.
NOTE: there is a security risk in keeping the control panel password on the webfaction server, because someone breaching your web server over SSH could take over your Webfaction account. If that is a concern, set USE_SESSKEY to True and use the fabric script below this script to pass a session id to the server. The session key expires 1 hour after the last API call.
File test_runner.py: on the server, you need to configure ./manage.py test to use WebfactionTestRunner
"""
This test runner uses Webfaction XML-RPC API to create and destroy database
"""
# you can put your control panel username and password here.
# NOTE: there is a security risk of having control panel password in
# the webfaction server, because someone breaching into your web server
# SSH could take over your Webfaction account. If that is a concern,
# set USE_SESSKEY to True and use the fabric script below this script to
# generate a session.
USE_SESSKEY = True
# CP_USERNAME = 'webfactionusername' # required if and only if USE_SESSKEY is False
# CP_PASSWORD = 'webfactionpassword' # required if and only if USE_SESSKEY is False
import sys
import os
from django.test.simple import DjangoTestSuiteRunner
from django import db
from webfaction import Webfaction
def get_sesskey():
f = os.path.expanduser("~/sesskey")
sesskey = open(f).read().strip()
os.remove(f)
return sesskey
if USE_SESSKEY:
wf = Webfaction(get_sesskey())
else:
wf = Webfaction()
wf.login(CP_USERNAME, CP_PASSWORD)
def get_db_user_and_type(connection):
db_types = {
'django.db.backends.postgresql_psycopg2': 'postgresql',
'django.db.backends.mysql': 'mysql',
}
return (
connection.settings_dict['USER'],
db_types[connection.settings_dict['ENGINE']],
)
def _create_test_db(self, verbosity, autoclobber):
"""
Internal implementation - creates the test db tables.
"""
test_database_name = self._get_test_db_name()
db_user, db_type = get_db_user_and_type(self.connection)
try:
wf.create_db(db_user, test_database_name, db_type)
except Exception as e:
sys.stderr.write(
"Got an error creating the test database: %s\n" % e)
if not autoclobber:
confirm = raw_input(
"Type 'yes' if you would like to try deleting the test "
"database '%s', or 'no' to cancel: " % test_database_name)
if autoclobber or confirm == 'yes':
try:
if verbosity >= 1:
print("Destroying old test database '%s'..."
% self.connection.alias)
wf.delete_db(test_database_name, db_type)
wf.create_db(db_user, test_database_name, db_type)
except Exception as e:
sys.stderr.write(
"Got an error recreating the test database: %s\n" % e)
sys.exit(2)
else:
print("Tests cancelled.")
sys.exit(1)
db.close_connection()
return test_database_name
def _destroy_test_db(self, test_database_name, verbosity):
"""
Internal implementation - remove the test db tables.
"""
db_user, db_type = get_db_user_and_type(self.connection)
wf.delete_db(test_database_name, db_type)
self.connection.close()
class WebfactionTestRunner(DjangoTestSuiteRunner):
def __init__(self, *args, **kwargs):
# Monkey patch BaseDatabaseCreation with our own version
from django.db.backends.creation import BaseDatabaseCreation
BaseDatabaseCreation._create_test_db = _create_test_db
BaseDatabaseCreation._destroy_test_db = _destroy_test_db
return super(WebfactionTestRunner, self).__init__(*args, **kwargs)
File webfaction.py: this is a thin wrapper for the Webfaction API; it needs to be importable by both test_runner.py (on the remote server) and fabfile.py (on the local machine)
import xmlrpclib

class Webfaction(object):
    def __init__(self, sesskey=None):
        self.connection = xmlrpclib.ServerProxy("https://api.webfaction.com/")
        self.sesskey = sesskey

    def login(self, username, password):
        self.sesskey, _ = self.connection.login(username, password)

    def create_db(self, db_user, db_name, db_type):
        """ Create a database owned by db_user """
        self.connection.create_db(self.sesskey, db_name, db_type, 'unused')
        # deletes the default user created by Webfaction API
        self.connection.make_user_owner_of_db(self.sesskey, db_user, db_name, db_type)
        self.connection.delete_db_user(self.sesskey, db_name, db_type)

    def delete_db(self, db_name, db_type):
        try:
            self.connection.delete_db_user(self.sesskey, db_name, db_type)
        except xmlrpclib.Fault as e:
            print 'ignored error:', e
        try:
            self.connection.delete_db(self.sesskey, db_name, db_type)
        except xmlrpclib.Fault as e:
            print 'ignored error:', e
File fabfile.py: a sample fabric script to generate the session key, needed only if USE_SESSKEY=True
import io

from fabric.api import *
from fabric.operations import run, put

from webfaction import Webfaction

env.hosts = ["webfactionusername@webfactionusername.webfactional.com"]
env.password = "webfactionpassword"

def run_test():
    wf = Webfaction()
    wf.login(env.hosts[0].split('@')[0], env.password)
    sesskey_file = '~/sesskey'
    sesskey = wf.sesskey
    try:
        put(io.StringIO(unicode(sesskey)), sesskey_file, mode='0600')
        # put your test code here
        # e.g. run('DJANGO_SETTINGS_MODULE=settings /path/to/virtualenv/python /path/to/manage.py test --testrunner=test_runner.WebfactionTestRunner')
        raise Exception('write your test here')
    finally:
        run("rm -f %s" % sesskey_file)
The accepted answer didn't work for me. It's so outdated that it didn't run on my legacy codebase with Django 1.5.
I wrote a blogpost entirely describing how I solved this issue by creating an alternative test runner and changing django settings to provide all the required config and to use new test runner.
You need to specify a sqlite ENGINE when running unit tests. Open settings.py and add the following just after the DATABASES section:
import sys

if 'test' in sys.argv or 'test_coverage' in sys.argv:  # Covers regular testing and django-coverage
    DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
    DATABASES['default']['NAME'] = ':memory:'
Modify the following methods in django/db/backends/creation.py:
def _destroy_test_db(self, test_database_name, verbosity):
    "Internal implementation - remove the test db tables."
    # Remove the test database to clean up after
    # ourselves. Connect to the previous database (not the test database)
    # to do so, because it's not allowed to delete a database while being
    # connected to it.
    self._set_test_dict()
    cursor = self.connection.cursor()
    self.set_autocommit()
    time.sleep(1)  # To avoid "database is being accessed by other users" errors.
    cursor.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='public'")
    rows = cursor.fetchall()
    for row in rows:
        try:
            print "Dropping table '%s'" % row[0]
            cursor.execute('drop table %s cascade ' % row[0])
        except:
            print "Couldn't drop '%s'" % row[0]
    # cursor.execute("DROP DATABASE %s" % self.connection.ops.quote_name(test_database_name))
    self.connection.close()

def _create_test_db(self, verbosity, autoclobber):
    "Internal implementation - creates the test db tables."
    suffix = self.sql_table_creation_suffix()
    if self.connection.settings_dict['TEST_NAME']:
        test_database_name = self.connection.settings_dict['TEST_NAME']
    else:
        test_database_name = TEST_DATABASE_PREFIX + self.connection.settings_dict['NAME']
    qn = self.connection.ops.quote_name
    # Create the test database and connect to it. We need to autocommit
    # if the database supports it because PostgreSQL doesn't allow
    # CREATE/DROP DATABASE statements within transactions.
    self._set_test_dict()
    cursor = self.connection.cursor()
    self.set_autocommit()
    return test_database_name

def _set_test_dict(self):
    if "TEST_NAME" in self.connection.settings_dict:
        self.connection.settings_dict["NAME"] = self.connection.settings_dict["TEST_NAME"]
    if "TEST_USER" in self.connection.settings_dict:
        self.connection.settings_dict['USER'] = self.connection.settings_dict["TEST_USER"]
    if "TEST_PASSWORD" in self.connection.settings_dict:
        self.connection.settings_dict['PASSWORD'] = self.connection.settings_dict["TEST_PASSWORD"]
Seems to work... just add the extra settings to your settings.py if you need 'em.
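For example (an illustration of the settings those methods read, not part of the original answer):

# hypothetical entries in settings.py consumed by _set_test_dict above
DATABASES['default']['TEST_NAME'] = 'test_mydb'
DATABASES['default']['TEST_USER'] = 'test_user'
DATABASES['default']['TEST_PASSWORD'] = 'test_password'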
Simple workaround: change TEST_DATABASE_PREFIX in django/db/backends/base/creation.py as you like.
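For reference, the constant sits near the top of that file and simply prefixes the auto-created test database name; changing it renames every test database Django creates:

# in django/db/backends/creation.py (older layout) or backends/base/creation.py
TEST_DATABASE_PREFIX = 'test_'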