Adonis migration:fresh command to recreate database and seed - adonis.js

Laravel has a command, php artisan migrate:fresh, which will "drop all tables from the database and then execute the migrate command".
While writing new migrations, we sometimes need to rebuild the database from scratch.
Running migration:refresh or migration:reset relies on every down method being correct, but during development they often aren't ready yet.
So having a migration:fresh command would be good to really recreate the schema.

I've created a command, MigrationFresh.js, that works like that for MySQL and PostgreSQL for now.
After installing it, call this to recreate the database from scratch and migrate:
adonis migration:fresh
If you want to seed after migrating, run:
adonis migration:fresh --seed

To do this more easily in AdonisJS 5, we can create a new command with
node ace make:command MigrationFresh
and then add this code to the generated command class:
// requires `import execa from 'execa'` at the top of the command file
public async run() {
  await execa.node('ace', ['migration:rollback'])
  console.log('Rolled back tables')
  await execa.node('ace', ['migration:run'])
  console.log('Migrated all tables')
}

Related

How to run Django migrations inside kubernetes and apply changes to the database pod?

I have deployed my app to Azure with Kubernetes. I have separate pods for the front end, back end, and database. I ran into problems when one of my database fields changed from id to userId. I have tried to apply this change to my deployment database, but without any luck. I logged into my back-end pod, removed the existing migration file, and ran python manage.py makemigrations & python manage.py migrate. After that I checked that everything was OK in the migration file, but now I don't know what to do. Do I need to remove the database pod and create it again? Or how do I update the database inside the pod?
The id -> userId change is a DDL change for your DB. I suggest that you exec into your DB pod and start your DB shell.
kubectl exec -it mysql-pod-name -- bash
Then you should be able to execute your DDL statement. MySQL example:
ALTER TABLE tableName
RENAME COLUMN id TO userId;

How to migrate db in django using CI/CD with Google CloudSQL?

I have a django repository setup in gitlab and I am trying to automate build and deploy on google cloud using gitlab CI/CD.
The app has to be deployed on App Engine and has to use CloudSQL for dynamic data storage.
The issue I am facing is running migrations on the DB before deploying my application.
I am supposed to run ./manage.py migrate which connects to cloudSQL.
I have read that we can use cloud proxy to connect to cloudSQL and migrate db. But it kind of seems like a hack. Is there a way to migrate my db via CI/CD pipeline script?
Any help is appreciated. Thanks.
When running Django in the App Engine Standard environment the recommended way of approaching database migration is to run ./manage.py migrate directly from the Console shell or from your local machine (which requires using the cloud sql proxy).
If you want the database migration to be decoupled from your application deployment and run it in Gitlab CI/CD you could do something along these lines:
Use google/cloud-sdk:latest as the base image
Acquire credentials with gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
Download the Cloud SQL proxy with wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy and make it executable with chmod +x cloud_sql_proxy.
Start the proxy with ./cloud_sql_proxy -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:3306.
Finally, run the migration script.
You might also create a custom Docker image that already does all of the above behind the scenes; the result would be the same.
If you want to read further on the matter, I suggest taking a look at the following articles: link1 link2.
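For that final step, the Django settings used by the migration job just need to point at the proxy's local port. A minimal sketch, assuming a MySQL instance and credentials passed in as CI variables (the variable names here are assumptions):
# settings used by the CI migration job (a sketch; names are assumptions)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASSWORD'],
        'HOST': '127.0.0.1',  # the cloud_sql_proxy started above listens here
        'PORT': '3306',       # matches the tcp:3306 mapping in the proxy command
    }
}
With the proxy running in the same job, ./manage.py migrate then talks to Cloud SQL as if it were a local database.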
I'm also trying to find the correct way to do that.
One other hacky way would be to just add a call to it in the settings file that is loaded with your app, something like the migrate.py file does:
from django.core.management import execute_from_command_line
execute_from_command_line(['./manage.py', 'migrate'])
so every time you deploy a new version of the app it will also run the migrations.
I want to believe there are other ways, not involving the proxy, especially if you also want to work with a private IP for the SQL instance; in that case this script must run in the same VPC.

flask-migrate: how to add a new database

I have a flask setup with development & staging environments.
Now I want to add a production environment with a production database.
I'm having trouble integrating the new database into flask-migrate.
I did these steps:
created fresh postgres DB
ran db.create_all() from the flask app
(resulting in a DB reflecting the latest version of the data model)
now all flask-migrate commands have errors
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "testfield" of relation "customer_feedback" already exists,
because flask-migrate seems to think it needs to apply all migrations that have been created so far. But they are not necessary, because the DB already fully reflects models.py.
How can I convince flask-migrate to accept the current state as fully migrated?
Or what's the standard workflow for this?
In other words:
I am coming from Django, where the migrate command creates and updates the model if necessary when adding a blank DB. How should it be done with flask?
You need to tell flask-migrate that the DB has already been created and all migrations are effectively applied. Try the following command:
flask db stamp head
This will tell flask-migrate not to attempt to apply anything; it just records the latest revision as the database's current state.
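If you prefer to do the stamping from Python instead of the CLI (for example from a provisioning script), Flask-Migrate exposes the same commands as functions. A minimal sketch, assuming the usual Flask app object from your project (the import path is an assumption):
from flask_migrate import stamp
from myapp import app  # hypothetical import path for your Flask app

with app.app_context():
    # records the latest revision in alembic_version without running any migrations
    stamp(revision='head')
After that, flask db migrate / flask db upgrade will only generate and apply changes made from this point on.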

In django, does "migrate" just do database stuff?

I understand that migrate sets up the database according to the migrations, but the scope of its operation seems to go beyond that.
When I ran manage.py migrate, not only did it apply the database migrations, which is good, but it also went through the main urls.py and started to execute the methods I had in there, which I would like to be executed only once, when the web app starts.
What does migrate actually do?
Thanks!
migrate is a Django management command, so when you run it through ./manage.py, Django first runs its setup and system checks, which import your project modules, including the root urls.py; any module-level code in them gets executed at that point.
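To see this concretely, any module-level statement in the root urls.py runs whenever Django imports it, and migrate triggers that import; the print below is just a stand-in for whatever startup code is in there:
# urls.py: module-level code like this runs for ./manage.py migrate too,
# not only when the web server starts
print("URLconf imported")  # also executed when migrate's checks import this module

urlpatterns = []
If something really must run only when the web process starts, the WSGI entry point (wsgi.py) is a common place for it, since commands like migrate don't import that module.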

Programmatically check whether there are django south migrations that need to be deployed

My deployment strategy looks like this (using Fabric):
create a new virtualenv
deploy new code in new virtualenv
show a maintenance page
copy the current db to new db
migrate new db
point new code to new db
symlink current virtualenv to new venv
restart services
remove maintenance page
I want to iterate fast. Now, most of the code changes do not contain migrations. Also, the db is growing, so there is a lot of overhead created by copying the database every time I deploy a (mostly small) change. To avoid copying the database I want to check whether there are migrations that need to be deployed (prior to step 4). If there are no migrations, I can go straight from step 2 to step 7. If there are, I will follow all the steps. For this, I need to check programmatically whether there are migrations that need to be deployed. How can I do this?
In step 2 while deploying the new code, you could deploy a script which when run on the server will detect if there are new migrations.
Example code is as follows:
# copied mostly from south.management.commands.migrate
from south import migration
from south.models import MigrationHistory

apps = list(migration.all_migrations())
applied_migrations = MigrationHistory.objects.filter(
    app_name__in=[app.app_label() for app in apps])
applied_migrations = ['%s.%s' % (mi.app_name, mi.migration) for mi in applied_migrations]

num_new_migrations = 0
for app in apps:
    for app_migration in app:
        if app_migration.app_label() + "." + app_migration.name() not in applied_migrations:
            num_new_migrations += 1

# print the count so the calling deployment script can read it
print(num_new_migrations)
If you wrap the above code up in a script, your fabric deployment script can use the run operation to get the number of new migrations.
If this returns zero, then you can skip the steps associated with copying the database.
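On the Fabric side this could look roughly like the following, assuming the snippet above is saved on the server as check_south_migrations.py (a file name chosen here just for illustration):
from fabric.api import run

def has_pending_migrations():
    # the check script prints the number of unapplied South migrations;
    # it needs the project's Django settings available to import the ORM
    output = run('python check_south_migrations.py')
    return int(output.strip()) > 0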
./manage.py migrate --all --merge --list | grep "( )"
will tell you which migrations are pending. If you want a return code or count, use wc.
This has the advantage of not copying and pasting code like the accepted answer (which violates DRY), and if the internal South API changes, your check will still work.
UPDATE:
Django 1.7 changed the output to use brackets instead of parentheses, and Django 1.8 introduced a showmigrations command:
./manage.py showmigrations --list | grep '\[ ]'
dalore's answer updated for Django 1.7+
./manage.py migrate --list | grep "\[ ]"
If you just want the count then:
./manage.py migrate --list | grep "\[ ]" | wc -l
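On Django 1.7+ you can also get the answer programmatically, without grepping command output, by asking the migration executor for its plan. A minimal sketch, assuming the default database alias:
from django.db import connections
from django.db.migrations.executor import MigrationExecutor

def pending_migration_count(database='default'):
    # compares migrations on disk against the django_migrations table in the given database
    executor = MigrationExecutor(connections[database])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    return len(plan)
Note that this relies on Django internals rather than a documented API, so re-check it when upgrading Django.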
Why are you moving the databases around? The whole point of migrations is to apply the changes you made in development to your production database in place.
Your steps should really be:
create a new virtualenv
deploy new code in new virtualenv
show a maintenance page
migrate new db
symlink current virtualenv to new venv
restart services
remove maintenance page
And the migration step doesn't take that long if there are no new migrations to run. It'll just run through each app saying it's already up to date.
If you're copying the database to have a backup, that's something that should be running anyway on cron or something, not as part of your deployment.
Actually, I'm confused about creating a new virtualenv each time too. The normal (read: most typical) deployment is:
deploy new code
migrate db
restart services
If you want to add back in the maintenance page stuff, you can, but the process takes only a minute or two total.
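For reference, a minimal Fabric sketch of that simplified flow (the path, commands and service name are placeholders, not taken from the question):
from fabric.api import cd, run, sudo

def deploy():
    with cd('/srv/myproject'):         # assumed project location on the server
        run('git pull')                # deploy new code
        run('./manage.py migrate')     # migrate the db in place; a quick no-op if nothing is pending
    sudo('service myproject restart')  # restart services (assumed service name)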