Packaging Flask-Script and Flask-Migrate utilities with Flask application - flask

I have an application designed using Flask with Flask-SQLAlchemy. To control database migrations as the models change, I am using Flask-Migrate to wrap Alembic and use it within the Flask-Script management context.
I am trying to decide how to split the package distribution to achieve the following goals:
Minimal set of dependencies for the main application package
Allow distribution of management scripts and test data for migrations and deployments, perhaps using a secondary package that depends on the main application module
The project structure is as follows:
/
    tests/        # test data
    migrations/   # alembic root, includes env.py and alembic.ini
    myapp/        # application package
    setup.py
    manage.py
    wsgi.py
My manage.py looks a lot like the following. This is how I avoid any Alembic or Flask-Migrate dependencies in the main application package: the Migrate object is attached to the app in manage.py. This also allows me to control migration configuration in the same config file as the general Flask/app configuration (since the config context is pushed by the manager and by Migrate.init_app):
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand

from myapp import db, create_app
from myapp.database import database_manager  # sub-manager for creating/dropping the db

def _create_app(*args, **kwargs):
    app = create_app(*args, **kwargs)
    if migration is not None:
        migration.init_app(app)
    return app

manager = Manager(_create_app)
migration = Migrate(db=db)
manager.add_command('database', database_manager)
manager.add_command('migration', MigrateCommand)

if __name__ == "__main__":
    manager.run()
The application submanager myapp.database.database_manager enables commands such as python manage.py database create/drop/test_data, which use SQLAlchemy to create the tables and populate them with test data from the tests/ directory but do not hook in any migration scripts. Meanwhile, python manage.py migration init/revision/migrate/... executes Flask-Migrate/Alembic commands using the application configuration context.
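For context, a minimal sketch of what the myapp/database.py submanager might look like (the command names come from the description above; the bodies are simplified placeholders):

from flask_script import Manager

from myapp import db

# Sub-manager attached in manage.py as the 'database' command.
database_manager = Manager(usage="Create, drop, and seed the application database")

@database_manager.command
def create():
    """Create all tables from the SQLAlchemy models (no Alembic involved)."""
    db.create_all()

@database_manager.command
def drop():
    """Drop all tables."""
    db.drop_all()

@database_manager.command
def test_data():
    """Populate the tables with test data from the tests/ directory."""
    pass  # placeholder: load fixtures from tests/ and insert them here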
I am trying to distribute this application to deploy on our internal servers. There are two distribution use cases, as they are currently used:
Install new server
Install source distribution
Create new config file to reflect DB host etc.
Use manage.py -c /path/to/config to create database tables with database create
Point HTTP Server to wsgi.py
Update existing server
Update source distribution
Use manage.py -c /path/to/server/config migration upgrade to pull up the databases using the current app context
Server continues to point to wsgi.py
I would like to move this distribution to wheels/packages so that I can automate deployment on our corp intranet. So what I'm seeing here is that I would like to split my package into the main package myapp, with only Flask framework dependencies and the WSGI entry point, and myapp-manage, which includes manage.py, tests, and the migrations directory.
So in the perfect world, the application will be installed on the system with global configuration files, and a specific management user will have access to the management package to perform upgrades.
Right now I am angling towards splitting the distribution with setup-app.py and setup-manage.py to create two separate packages from the same source tree. Is there a more appropriate way to package migrations and management scripts with a Flask application? I have dug through the docs, and while it's clear how to package a Flask application, distribution strategies for management scripts and migrations are less clear.
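For concreteness, a hedged sketch of what the secondary setup-manage.py might contain under that split (the distribution name and version are assumptions, and migrations/ and tests/ would still need to be shipped alongside, e.g. via MANIFEST.in):

from setuptools import setup

setup(
    name='myapp-manage',
    version='1.0.0',
    py_modules=['manage'],   # the manage.py entry point
    install_requires=[
        'myapp',             # the main application package (Flask-only deps)
        'Flask-Script',      # management/CLI layer, needed only here
        'Flask-Migrate',     # pulls in Alembic
    ],
)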

Related

flask-migrate usage in production

This question is about the usage pattern of Flask-Migrate when it comes time to deploy. To set up a server or a Docker container with your application, you need to create the databases.
Typically as in https://github.com/miguelgrinberg/flasky, the migrations folder is in the root of the project. This makes sense, but it means that in production, the migrations folder is not available if you are pulling the flask application as an installed package.
Is the correct pattern to copy the migrations folder to the container and run an upgrade there, or something else entirely? This seems awkward, because I would have to keep migrations in sync with the version of the app that I'm pulling from the python package repo. I am aware that it is possible to forego migrations entirely and just do db.create_all(), but if that is the answer, then I may be confused about the purpose of db migrations.
You can include files in a package in two steps:
1. Set include_package_data to True in setup.py:
from setuptools import find_packages, setup

setup(
    name='myapp',
    version='1.0.0',
    packages=find_packages(),
    include_package_data=True,  # <--
    zip_safe=False,
    install_requires=[
        'flask',
    ],
)
2. Include the file pattern in MANIFEST.in:
graft myapp/static
graft myapp/templates
graft migrations # <--
These files will be included when you build the package. See here for the full list of available MANIFEST.in commands.
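One hedged follow-up: if you move the migrations directory inside the package (e.g. myapp/migrations) so that it is actually installed rather than only included in the sdist, you can point Flask-Migrate at it explicitly with its directory argument. A minimal sketch, assuming that layout:

import os

import myapp  # your installed application package (name assumed)
from flask_migrate import Migrate

def init_migrations(app, db):
    # Locate the migrations directory shipped inside the installed package.
    migrations_dir = os.path.join(os.path.dirname(myapp.__file__), 'migrations')
    return Migrate(app, db, directory=migrations_dir)

With this in place, the upgrade command finds the packaged migration scripts on the production host without any manual copying.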

Django initial data for built-in app

I'm starting to use the "redirects" app built into Django to replace some existing redirects I have in my urls.py. I was wondering if there was any generally accepted way to include initial data in the code base for other apps. For example, if it was for an app I created I could create a migration file that had a RunPython section where I could load some initial data. With a built-in or third-party app there doesn't seem to be any way to create a migration file to add initial data.
The best I can think of right now is to include a .sql file in my repository with the initial data and just manually import the data as I push the code to the different instances.
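(For reference, the RunPython approach I described for my own apps would look something like this hedged sketch if it targeted the redirects models; the paths are placeholders and the dependency on my app's previous migration is hypothetical:)

from django.db import migrations

def add_redirects(apps, schema_editor):
    # Use the historical models, not direct imports.
    Site = apps.get_model('sites', 'Site')
    Redirect = apps.get_model('redirects', 'Redirect')
    site = Site.objects.first()  # placeholder: pick the relevant site
    Redirect.objects.get_or_create(
        site=site, old_path='/old-page/', defaults={'new_path': '/new-page/'}
    )

class Migration(migrations.Migration):
    dependencies = [
        ('redirects', '0001_initial'),
        ('sites', '0001_initial'),
        ('myapp', '0001_initial'),  # hypothetical: my app's previous migration
    ]
    operations = [migrations.RunPython(add_redirects, migrations.RunPython.noop)]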
You can do it by using fixtures:
Create a folder named fixtures in your app directory.
Use this command to create a JSON file of the data that you want to use as initial data:
python manage.py dumpdata your_app_name.model_name --indent 2 > model_name.json
Copy this model_name.json to the fixtures folder.
Upload the code to the repo.
Then, after the migrate command, type this command to load the initial data:
python manage.py loaddata model_name.json

Can manage.py runserver execute npm scripts?

I am developing a web application with React for frontend and Django for backend. I use Webpack to watch for changes and bundle code for React apps.
The problem is that I have to run two commands concurrently, one for React and the other one for Django:
webpack --config webpack.config.js --watch
./manage.py runserver
Is there any way to customize the runserver command to execute the npm script, like npm run start:dev? When you use Node.js as a backend platform, you can do a similar job with something like npm run build:client && npm run start:server.
If you are already using webpack and Django, you might be interested in webpack-bundle-tracker and django-webpack-loader.
Basically, webpack-bundle-tracker creates a stats.json file each time the bundle is built, and django-webpack-loader watches that stats.json file to reload the dev server. This stack lets you separate the concerns of the server and the client.
There are a couple of posts out there explaining this pipeline.
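A hedged sketch of the Django side of that setup (settings.py; the option names follow django-webpack-loader's documented configuration, but check the version you install):

import os

# BASE_DIR as defined in a standard Django settings module.
WEBPACK_LOADER = {
    'DEFAULT': {
        'BUNDLE_DIR_NAME': 'bundles/',  # folder inside your static files dir
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'),
    }
}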
I'm two and a half years late, but here's a management command that implements the solution that OP wanted, rather than a redirection to another solution. It inherits from the staticfiles runserver and runs webpack concurrently in a thread.
Create this management command at <some_app>/management/commands/my_runserver.py:
import os
import subprocess
import threading

from django.contrib.staticfiles.management.commands.runserver import (
    Command as StaticFilesRunserverCommand,
)
from django.utils.autoreload import DJANGO_AUTORELOAD_ENV

class Command(StaticFilesRunserverCommand):
    """This command removes the need for two terminal windows when running runserver."""

    help = (
        "Starts a lightweight Web server for development and also serves static files. "
        "Also runs a webpack build worker in another thread."
    )

    def add_arguments(self, parser):
        super().add_arguments(parser)
        parser.add_argument(
            "--webpack-command",
            dest="wp_command",
            default="webpack --config webpack.config.js --watch",
            help="This webpack build command will be run in another thread (should probably have --watch).",
        )
        parser.add_argument(
            "--webpack-quiet",
            action="store_true",
            dest="wp_quiet",
            default=False,
            help="Suppress the output of the webpack build command.",
        )

    def run(self, **options):
        """Run the server with webpack in the background."""
        if os.environ.get(DJANGO_AUTORELOAD_ENV) != "true":
            self.stdout.write("Starting webpack build thread.")
            quiet = options["wp_quiet"]
            command = options["wp_command"]
            kwargs = {"shell": True}
            if quiet:
                # If --webpack-quiet was passed, suppress the webpack command's output:
                kwargs.update({"stdin": subprocess.PIPE, "stdout": subprocess.PIPE})
            wp_thread = threading.Thread(
                target=subprocess.run, args=(command,), kwargs=kwargs
            )
            wp_thread.start()
        super(Command, self).run(**options)
For anyone else trying to write a command that inherits from runserver, note that you need to check for the DJANGO_AUTORELOAD_ENV variable to make sure you don't create a new thread every time Django notices a .py file change. Webpack should be doing its own auto-reloading anyway.
Use the --webpack-command argument to change the webpack command that runs (for example, I use --webpack-command 'vue-cli-service build --watch').
Use --webpack-quiet to disable the command's output, as it can get messy.
If you really want to override the default runserver, rename the file to runserver.py and make sure the app it lives in comes before django.contrib.staticfiles in your settings module's INSTALLED_APPS.
You shouldn't mess with the built-in management commands but you can make your own: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/.
In your place, I'd leave runserver alone and create a separate command to run your custom (npm in this case) script, e.g. with os.execvp, as sketched below.
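A hedged sketch of that idea (the command name npmdev and the npm script are placeholders):

import os

from django.core.management.base import BaseCommand

# Hypothetical file: <some_app>/management/commands/npmdev.py
class Command(BaseCommand):
    help = "Run the frontend dev build via npm"

    def handle(self, *args, **options):
        # execvp replaces the current Python process with npm,
        # so nothing after this line runs.
        os.execvp("npm", ["npm", "run", "start:dev"])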
In theory you could run two parallel subprocesses, one that would execute, for example, django.core.management.execute_from_command_line and a second to run your script. But that would make using tools like pdb impossible (which makes work very hard).
The way I do it is to leverage Docker and Docker Compose. With docker-compose up -d, my database service, npm scripts, Redis, etc. run in the background (I run runserver separately, but that's another topic).

How do I load fixtures from third-party app for django in tests?

I know fixtures can be loaded in the tests.py like this:
fixtures = ['example.json']
Django automatically searches all the apps' fixtures directories and loads the fixtures.
However, I created a reusable app named accounts, and in accounts I also have the fixtures/example.json file. But when I installed the accounts app and added it to the INSTALLED_APPS setting, the fixture could not be loaded. I am curious why this happens.
Django == 1.8.2
You might be able to load those fixtures if:
Your app's fixtures are accessible; if the app is in the same project structure, they are directly accessible.
If the app is installed by setup.py, the files should be included with package_data={'mypkg': ['fixtures/*.json']}, depending on your configuration (see the setup.py sketch at the end of this answer).
You have to know where your virtualenv is located on the local system; this way you can tell Django in settings where the fixtures are located. Imagine the virtualenv is located in the parent directory of the project; then you can point to the fixtures like this:
settings.py:
VENV_DIR = os.path.join(os.path.dirname(__file__), '..')
FIXTURE_DIRS = (
    os.path.join(VENV_DIR, "virtualenv/lib/python2.7/site-packages/yourapp/test/fixtures"),
)
Note that this depends on the local system; it's not a clean solution, but it works. Pay attention if you are running tests on remote systems like Jenkins, because you'll have to change this configuration according to the server deployment settings.
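Expanding on the package_data point above, a hedged setup.py sketch for the reusable accounts app (the name and version are assumptions):

from setuptools import find_packages, setup

setup(
    name='accounts',
    version='0.1',
    packages=find_packages(),
    # Ship the fixtures with the installed package so they end up
    # in site-packages next to the app code:
    package_data={'accounts': ['fixtures/*.json']},
)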

Pushing Django updates to production server

I need some advice on pushing Django updates, specifically the database updates, from my development server to my production server. I believe that updating the scripts, files, and such will be easy -- simply copy the new files from the dev server to the production server. However, the database updates are what have me unsure.
For Django, I have been using South during initial web app creation to change the database schema. If I were to have some downtime on the production server for updates, I could copy all the files over to the production server. These would include any changed models.py files, which describe the database tables. I could then perform a python manage.py schemamigration my_app --auto and then a python manage.py migrate my_app to update the database based on the new files/models.py I've copied over.
Is this an OK solution or are there more appropriate ways to go about updating a database from development to production servers?
Your thoughts?
Thanks
Actually, python manage.py schemamigration my_app --auto will only create the migration based on the changes in models.py. To actually apply the migration to the database, you need to run python manage.py migrate my_app. Another option would be to create migrations (by running schemamigration) on the development server, and then copy over the migration files to the production server and apply migrations by running migrate.
Of course, having a source code repository would be way better than copying the files around. You could create migrations on your development server, commit them to the repository, pull the new files from the repository on the production server, and finally apply the migrations.