I have two databases configured in my app:
DATABASES = {
    'default': { ... },
    'legacy': { ... },
}
The legacy database is only used in a particular part of the app (I've added it as a second database for convenience).
This works fine, except that when I try to run tests, Django tries to create a test database for the legacy database as well, causing an error:
Got an error creating the test database: (1044, "Access denied for user ... to database 'test_...'")
How can I tell Django not to create a test database for the second legacy database?
I thought setting the following would work:
DATABASES['legacy']['TEST'] = {
    'NAME': None,
    'CREATE_DB': False,
}
but that doesn't seem to help.
This looks like a common issue with multiple databases and testing in Django. Here is how I generally deal with it.
import sys

DATABASES = {
    'default': { ... },
    'legacy': { ... },
}

# You can add any other kind of check here (not prod, DEBUG == True, etc.)
if "test" in sys.argv:
    del DATABASES["legacy"]
    # Or, equivalently:
    # DATABASES = {"default": DATABASES["default"]}
This works well when you have only one settings file; you can easily adapt it for other setups.
If you are handling many databases, another option is to rebuild the settings from the ground up in your test settings:
import sys

DATABASES = {
    'default': { ... },
    'legacy': { ... },
}

# You can add any other kind of check here (not prod, DEBUG == True, etc.)
if "test" in sys.argv:
    DATABASES = {"default": {}}
    DATABASES["default"]["ENGINE"] = "django.db.backends.sqlite3"
    # Etc. Add what you need here.
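For what it's worth, TEST['CREATE_DB'] is an Oracle-only setting, which is why the attempt in the question has no effect on MySQL. On Django 1.7 and above, another workaround is to point the legacy alias at the default test database with the TEST MIRROR setting, which stops Django from creating (or needing permissions for) a second test database. A sketch, with the caveat that 'legacy' is not really a read replica; MIRROR simply reuses the default test connection:
DATABASES['legacy']['TEST'] = {
    'MIRROR': 'default',  # reuse the default test database; none is created for 'legacy'
}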
In Flask the configuration module is pretty straightforward, and there are ample best practices on the topic available on the internet.
Suppose I had to develop an application which supports multiple instances: for example, let's say there is a database for every city supported by the application, and every city DB is an independent MongoDB instance hosted on a different physical machine.
Some sample API code to support my example:
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)

class CityPopulation(Resource):
    def get(self, city_name):
        '''
        CODE TO GET CITY-BASED DB CONFIG
        Currently in JSON format
        '''
        total_population = helper_city(city_name)  # helper defined elsewhere
        return {'population': total_population}

api.add_resource(CityPopulation, '/<string:city_name>/population')

if __name__ == '__main__':
    app.run(debug=True)
Currently, what I've thought of is a JSON file with a section for the DBs, as below:
{
    "db": [{
        "bengaluru": {
            "host": "bengaluru.host.db",
            "port": 27017,
            "user_name": "some_user",
            "password": "royalchallengers"
        },
        "hyderabad": {
            "host": "hyderabad.host.db",
            "port": 27017,
            "user_name": "some_user",
            "password": "sunrisers"
        }
    }]
}
and the class to read the configuration from JSON as:
import json

class project_config:
    def __init__(self):
        # config_full_path points at the JSON file above
        with open(config_full_path, 'r') as myfile:
            self.configuration_raw = json.load(myfile)
In Flask, the best practice suggested for the config module is along these lines:
class BaseConfig(object):
    DEBUG = False
    TESTING = False

class DevelopmentConfig(BaseConfig):
    DEBUG = True
    TESTING = True

class TestingConfig(BaseConfig):
    DEBUG = False
    TESTING = True
Is there a way to fold my scenario into the Flask configuration, rather than maintaining a separate project configuration? In terms of best practices.
The object-based config in the Flask docs is described as "an interesting pattern", but it's just one approach, not necessarily best practice.
You can update the contents of app.config in any way that makes sense for your use case. You could fetch values at run time from a service like etcd, ZooKeeper, or Consul; set them all via environment variables (a useful pattern with containerized apps); or load them from a config file, like this:
import os
import json

from flask import Flask
from ConfigParser import ConfigParser  # configparser on Python 3

app = Flask(__name__)

def load_config():
    config = ConfigParser()
    config.read(os.environ.get('MY_APP_CONFIG_FILE'))
    for k, v in config.items('my_app'):
        app.config[k] = v

@app.route('/')
def get_config():
    return json.dumps(dict(app.config), default=str)

load_config()
And then run it like:
$ cat test.ini
[my_app]
thing = stuff
other_thing = junk
$ MY_APP_CONFIG_FILE=test.ini FLASK_APP=test.py flask run
$ curl -s localhost:5000 | jq '.'
{
"JSON_AS_ASCII": true,
"USE_X_SENDFILE": false,
"SESSION_COOKIE_SECURE": false,
"SESSION_COOKIE_PATH": null,
"SESSION_COOKIE_DOMAIN": null,
"SESSION_COOKIE_NAME": "session",
"LOGGER_HANDLER_POLICY": "always",
"LOGGER_NAME": "test",
"DEBUG": false,
"SECRET_KEY": null,
"EXPLAIN_TEMPLATE_LOADING": false,
"MAX_CONTENT_LENGTH": null,
"APPLICATION_ROOT": null,
"SERVER_NAME": null,
"PREFERRED_URL_SCHEME": "http",
"JSONIFY_PRETTYPRINT_REGULAR": true,
"TESTING": false,
"PERMANENT_SESSION_LIFETIME": "31 days, 0:00:00",
"PROPAGATE_EXCEPTIONS": null,
"TEMPLATES_AUTO_RELOAD": null,
"TRAP_BAD_REQUEST_ERRORS": false,
"thing": "stuff", <---
"JSON_SORT_KEYS": true,
"JSONIFY_MIMETYPE": "application/json",
"SESSION_COOKIE_HTTPONLY": true,
"SEND_FILE_MAX_AGE_DEFAULT": "12:00:00",
"PRESERVE_CONTEXT_ON_EXCEPTION": null,
"other_thing": "junk", <---
"SESSION_REFRESH_EACH_REQUEST": true,
"TRAP_HTTP_EXCEPTIONS": false
}
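If you want to keep the class-based pattern from the question while avoiding a separate project configuration, you can also fold the per-city DB mapping into app.config alongside the class-based defaults. A sketch, assuming the JSON file from the question is saved at a hypothetical path db_config.json:
import json

from flask import Flask

class BaseConfig(object):
    DEBUG = False
    TESTING = False

class DevelopmentConfig(BaseConfig):
    DEBUG = True
    TESTING = True

app = Flask(__name__)
app.config.from_object(DevelopmentConfig)

# Fold the per-city DB definitions into app.config so everything lives
# in one place. The JSON nests the city mapping in a one-element list.
with open('db_config.json') as f:  # hypothetical path
    app.config['CITY_DBS'] = json.load(f)['db'][0]

def db_settings_for(city_name):
    # Look up the MongoDB connection settings for one city.
    return app.config['CITY_DBS'][city_name]
A view (or the helper_city function from the question) can then call db_settings_for(city_name) instead of reading a separate configuration object.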
What I want to do is fail over Redis with Django, but I cannot find a way to do it.
What I've set up is as follows:
I'm using Redis as a session backend.
I've set up two Redis servers in a master-slave relationship, so that when the master fails, the slave automatically becomes master (using Sentinel).
I set up settings.py like this:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': [
            "127.0.0.1",
            "IPofSlave",
        ],
        'OPTIONS': {
            'PASSWORD': "xxxxxxxx",
            'DB': 0,
        }
    }
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
SESSION_CACHE_ALIAS = "default"
I want Django to use only the master normally, and to switch automatically to the slave when it can't connect to the master.
How could I do this by editing settings.py, or should I take another approach?
I would probably go with something like https://github.com/KabbageInc/django-redis-sentinel/blob/master/README.md, which adds Sentinel support to the Django Redis plugin. There may be others more suitable; this was at the top of the list in a Google search for Django Sentinel.
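Whichever plugin you use, the underlying mechanism is the same: instead of hard-coding the master's address, the client asks the Sentinel processes for the current master at connection time, so a failover needs no settings change. A minimal redis-py sketch, assuming a Sentinel on localhost:26379 monitoring a master group named 'mymaster':
from redis.sentinel import Sentinel

# Sentinel tracks which node is currently the master for 'mymaster'.
sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)

master = sentinel.master_for('mymaster', password='xxxxxxxx', db=0)
replica = sentinel.slave_for('mymaster', password='xxxxxxxx', db=0)

master.set('greeting', 'hello')   # writes always go to the current master
print(replica.get('greeting'))    # reads can be served by a slave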
I've got Redis set up as my cache in Django, with the following setting:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': 'localhost:6379',
        'OPTIONS': {
            'PICKLE_VERSION': 1,
        },
    },
}
And I'm experimenting with it (new to Redis, want to understand it better). So, I go into my Django shell, and I do:
from django.core.cache import cache
cache.set('asdf', 2)
cache.get('asdf') # Returns 2
And then I go into redis-cli, where I expect to see the value, but none of these show any values:
KEYS *
GET *
GET 'asdf'
What's up with that?
Redis has 16 databases by default. As @Bernhard says in his comment, you can see how many keys each has with:
INFO KEYSPACE
Which in my case returned:
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
db1:keys=2,expires=2,avg_ttl=504748260
And you can SELECT the database you want to inspect with:
SELECT 1
At which point, sure enough, I can see the keys I expected:
KEYS *
1) ":1:asdf"
2) ":1:django.contrib.sessions.cacheg2l0bo9z88z8bn4q2ep0andjgo8zrzzk"
I'm running into an issue where South creates the DB table for a new model as INNODB when I migrate, but creates the table as MYISAM when another developer runs their own migration.
The problem with this is that all my other tables are MYISAM, so using the new tables leads to many foreign key constraint errors.
How can I explicitly make sure the table is created using MYISAM?
What could be causing the table to be created using a different storage engine in different environments?
To be sure that all migrations are always done using INNODB, you should set the storage engine in the database definition directly, like this:
DATABASES = {
    'default': {
        ...
        'OPTIONS': {'init_command': 'SET storage_engine=INNODB'},
    }
}
If you are using MySQL 5.7.x or above, use:
DATABASES = {
    'default': {
        ...
        'OPTIONS': {'init_command': 'SET default_storage_engine=INNODB'},
    }
}
But you should know that this can have a performance hit, since the init_command runs on every new connection. So you may want to set this option only when running migrations.
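One way to do that, as a sketch (assuming a single settings file and the standard manage.py entry point), is to add the option only when a migration command is on the command line:
import sys

# Only pay the init_command cost on connections opened by migrations.
if 'migrate' in sys.argv or 'syncdb' in sys.argv:
    DATABASES['default'].setdefault('OPTIONS', {})
    # Use 'SET storage_engine=INNODB' on MySQL versions before 5.7.
    DATABASES['default']['OPTIONS']['init_command'] = 'SET default_storage_engine=INNODB'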
If you use South, you can set the STORAGE_ENGINE.
django < 1.2
# add to your settings file
DATABASE_STORAGE_ENGINE = 'INNODB' # django < 1.2
django >= 1.2
# add to your settings file
DATABASES = {
    'default': {
        ...
        'STORAGE_ENGINE': 'INNODB',
    }
}
I'm specifying the databases using a Python dictionary:
DATABASES = {
    'default': {
        'ENGINE': 'mysql',
        'NAME': 'testsqldb',
        'USER': '<username>',
        'PASSWORD': '<password>',
    },
    'dynamic_data': {
        'ENGINE': 'sqlite3',
        'NAME': '',
        'USER': '',
        'PASSWORD': '',
    },
}
How can I specify the name of my test database? I've been trying to use TEST_NAME = 'auto_tests' in the settings.py file. However, when I run python manage.py test <app_name>, I get the following message:
Creating test database 'default'...
Got an error creating the test database: (1007, "Can't create database 'test_testsqldb'; database exists")
Type 'yes' if you would like to try deleting the test database 'test_testsqldb', or 'no' to cancel:
I'm expecting the system to create a separate database when running my tests, presumably called 'auto_tests_testsqldb'; however, it's still asking me about test_testsqldb.
Any advice is appreciated!
In Django 1.6 and below, TEST_NAME should be a key of one of your database dictionaries. But in Django 1.7 and above, you use a TEST key, which is a dictionary of settings for test databases.
You probably want:
DATABASES = {
    'default': {
        'ENGINE': 'mysql',
        'NAME': 'testsqldb',
        'USER': '<username>',
        'PASSWORD': '<password>',
        'TEST': {
            'NAME': 'auto_tests',
        },
    },
    'dynamic_data': {
        'ENGINE': 'sqlite3',
        'NAME': '',
        'USER': '',
        'PASSWORD': '',
    },
}
Alternatively, perhaps you want to use a different engine for your tests? In that case, I think you'll just have to create a separate settings file for testing. It can import from your standard settings module and override DATABASES.
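A minimal sketch of such a test settings module (the file name test_settings.py is a hypothetical choice):
# test_settings.py -- reuse the real settings, but swap the databases
# out for a throwaway in-memory SQLite instance.
from settings import *  # noqa: F401,F403

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    },
}
Then run the tests with python manage.py test <app_name> --settings=test_settings.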