We have a Clojure service that runs in a Docker container on Amazon ECS. When the container is deployed, the Clojure service connects to the database and always runs migrations on startup.
The problem is that if we need to roll back the code deploy, the container we roll back to has the old code in it and does not have access to the rollback migrations that the latest container has.
This doesn't happen often, but when it does, how do we perform the DB rollback?
The best we can think of right now is to do it manually.
Anyone have experience doing this programmatically?
It seems like you should consider separating your migrations from your actual deployable. Everyone will have their own preference for managing migrations, but you lose flexibility when you package your migrations into your application. A dedicated migration tool can act more intelligently when it runs on its own. For example, some database migrations are impossible to roll back without some sort of snapshot system, e.g. any migration that removes data. Additionally, it's bad practice for your application to have the permissions necessary to perform migrations, and you also cannot easily audit which user performed the migration.
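As a purely illustrative sketch of what that separation can look like, a deploy pipeline might run the migration tool as its own step, under a dedicated migration role, before the new application containers are rolled out. The command name and environment variables below are placeholders, not any particular tool:

```python
# Hypothetical deploy-pipeline step: run migrations as their own stage, with a
# dedicated migration role, before the application containers are rolled out.
# "migrate" and the environment variable names are placeholders for whatever
# standalone migration tool you choose.
import os
import subprocess
import sys

def run_migrations() -> None:
    env = dict(os.environ)
    # Credentials for a role that is allowed to alter the schema; the
    # application itself would connect with a less-privileged role.
    env["DATABASE_URL"] = os.environ["MIGRATION_DATABASE_URL"]

    result = subprocess.run(
        ["migrate", "up"],   # placeholder CLI for your migration tool
        env=env,
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
        sys.exit("Migrations failed; aborting before the ECS deploy is updated.")

if __name__ == "__main__":
    run_migrations()
```

Because the step runs once, in the pipeline, a rollback can run the same tool's down migrations from the repository at the commit you are rolling back from, independent of which application image is deployed.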
Related
What is the recommended deployment strategy for running database migrations with ECS Fargate?
I could update the container command to run migrations before starting the gunicorn server. But this can result in concurrent migration runs if more than one instance is provisioned.
I also have to consider the fact that images are already running. Even if I figure out how to run migrations before the new images are up and running, the old images are still running on old code and may break or cause strange data-corruption side effects.
I was thinking of creating a new ECS::TaskDefinition, having it run a one-off migration script that applies the migrations, and then letting the container exit. I would then update all of the other TaskDefinitions to have a DependsOn for it, so that they won't start until it finishes.
I could update the container command to run migrations before starting the gunicorn server. But this can result in concurrent migration runs if more than one instance is provisioned.
That is one possible solution. To avoid the concurrency issue you would have to add some sort of distributed locking in your container script to grab a lock from DynamoDB or something before running migrations. I've seen it done this way.
Another option I would propose is running your Django migrations from an AWS CodeBuild task. You could either trigger it manually before deployments, or automatically before a deployment as part of a larger CI/CD deployment pipeline. That way you would at least not have to worry about more than one running at a time.
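As a rough illustration of the CodeBuild route, a pipeline step could start the build and block until it finishes, something like the sketch below. The project name is an assumption, and the project's buildspec would be what actually runs manage.py migrate:

```python
# Rough sketch: kick off a CodeBuild project that runs the migrate command and
# wait for it to finish before continuing the deployment. The project name
# "run-django-migrations" is an assumption.
import time
import boto3

codebuild = boto3.client("codebuild")

def run_migration_build(project_name: str = "run-django-migrations") -> None:
    build_id = codebuild.start_build(projectName=project_name)["build"]["id"]

    while True:
        build = codebuild.batch_get_builds(ids=[build_id])["builds"][0]
        status = build["buildStatus"]
        if status == "IN_PROGRESS":
            time.sleep(10)       # still running; poll again shortly
            continue
        if status != "SUCCEEDED":
            raise RuntimeError(f"Migration build {build_id} finished with {status}")
        return

if __name__ == "__main__":
    run_migration_build()
```

Since CodeBuild runs the project in a single build environment, only one migrate invocation happens per deployment, which is the whole point of this option.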
I also have to consider the fact that images are already running. Even if I figure out how to run migrations before the new images are up and running, the old images are still running on old code and may break or cause strange data-corruption side effects.
That's a problem with every database migration in every system that has ever been created. If you are very worried about it, you would have to do blue-green deployments with separate databases to avoid this issue. Or you could just accept some downtime during deployments by configuring ECS to stop all old tasks before starting new ones.
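For the downtime option, the ECS side is just the service's deployment configuration. A hedged boto3 sketch with placeholder cluster and service names:

```python
# Sketch of the "accept some downtime" option: configure the ECS service so a
# deployment stops all old tasks (minimumHealthyPercent=0) before any new task
# is started (maximumPercent=100). Cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-django-service",
    deploymentConfiguration={
        "maximumPercent": 100,       # never run old and new tasks side by side
        "minimumHealthyPercent": 0,  # allow ECS to drain everything first
    },
)
```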
I was thinking of creating a new ECS::TaskDefinition, having it run a one-off migration script that applies the migrations, and then letting the container exit. I would then update all of the other TaskDefinitions to have a DependsOn for it, so that they won't start until it finishes.
This is a good idea, but I'm not aware of any way to set DependsOn for separate tasks. The only DependsOn setting I'm aware of in ECS is for multiple containers in a single task.
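If you want to approximate that dependency yourself, one option (an assumption about how you might wire it, not a built-in ECS feature) is to run the migration task with the ECS API, wait for it to stop, and only then update the application service. A rough boto3 sketch, with placeholder names and network settings:

```python
# Approximate a cross-task "DependsOn": run the one-off migration task, wait
# for it to stop, check its exit code, and only then point the service at the
# new task definition. All names and the network configuration are placeholders.
import boto3

ecs = boto3.client("ecs")
CLUSTER = "my-cluster"

def run_migrations_then_deploy(migration_task_def: str, app_task_def: str) -> None:
    task_arn = ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=migration_task_def,   # e.g. "my-app-migrate:42"
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )["tasks"][0]["taskArn"]

    # Block until the one-off migration container exits.
    ecs.get_waiter("tasks_stopped").wait(cluster=CLUSTER, tasks=[task_arn])

    task = ecs.describe_tasks(cluster=CLUSTER, tasks=[task_arn])["tasks"][0]
    exit_code = task["containers"][0].get("exitCode")
    if exit_code != 0:
        raise RuntimeError(f"Migration task failed with exit code {exit_code}")

    # Only now roll the application service onto the new task definition.
    ecs.update_service(cluster=CLUSTER, service="my-app", taskDefinition=app_task_def)
```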
What are the best practices for managing Django schema/data migrations in a containerized deployment? We are having issues with multiple containers attempting to run the migrate command on deploy and running our migrations in parallel. I was surprised to learn that Django doesn't coordinate this internally to prevent the migrations from being run by two containers at the same time. We are using AWS ECS and struggling to automatically identify one container as the master node from which to run our migrations.
I've run into this exact problem in the past, and I was also surprised to find that Django doesn't use a database lock to manage this like other DB migration tools do.
My solution (inspired by Terraform's state lock mechanism) was to create a DynamoDB table to use for distributed locks. I'm using boto3 to perform a write to the lock table with a ConditionExpression that the lock doesn't already exist (plus a while loop to wait if it does), then I call the Django migrate command, and then I delete the lock.
I bundled that logic into a Django management command, and the startup script in the Docker image I deploy to ECS runs that command before it starts the Django app.
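For anyone who wants to see roughly what that looks like, here is a minimal sketch of such a management command, assuming a DynamoDB table called migration-locks with a LockID partition key (both names are made up for the example):

```python
# myapp/management/commands/migrate_with_lock.py
# Minimal sketch of the approach described above: take a DynamoDB lock before
# running migrate, release it afterwards. Table and key names are assumptions.
import time

import boto3
from botocore.exceptions import ClientError
from django.core.management import call_command
from django.core.management.base import BaseCommand

TABLE = "migration-locks"
LOCK_ID = "django-migrations"

class Command(BaseCommand):
    help = "Run Django migrations under a DynamoDB distributed lock."

    def handle(self, *args, **options):
        dynamodb = boto3.client("dynamodb")

        # Spin until the conditional write succeeds, i.e. nobody else holds the lock.
        while True:
            try:
                dynamodb.put_item(
                    TableName=TABLE,
                    Item={"LockID": {"S": LOCK_ID}},
                    ConditionExpression="attribute_not_exists(LockID)",
                )
                break
            except ClientError as exc:
                if exc.response["Error"]["Code"] != "ConditionalCheckFailedException":
                    raise
                time.sleep(5)  # another container is migrating; wait and retry

        try:
            call_command("migrate", interactive=False)
        finally:
            dynamodb.delete_item(TableName=TABLE, Key={"LockID": {"S": LOCK_ID}})
```

The startup script can then run python manage.py migrate_with_lock before launching the app server, which is essentially the setup described above.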
We've run into an issue where we have the db backed up, but the migrations got out of sync, and as a result there are a lot of GraphQL queries in our frontend code that don't match up to the db relationships at all.
I'm new to the project, but it looks like people were just making changes in the Hasura console instead of via the CLI and committing migrations.
I'm going through and recreating relationships manually so they match up with the GraphQL queries in the frontend, but moving forward I'd like to ensure this doesn't happen again.
We'd also prefer to move everything from our Docker image on Heroku to Hasura Cloud if possible.
My question is:
Is there a standardized pattern for ensuring the db data, the db schema, and the Hasura [preferably Hasura Cloud] metadata are all version controlled?
Moreover, is there a way to enforce that pattern so other devs can't simply tweak things in the Hasura Console and everything gets out of sync again? 😬
Thank you so much in advance if you can help. 🙇‍♂️
https://hasura.io/blog/moving-from-local-development-staging-production-with-hasura/ is a good place to start.
I’d strongly encourage following that. Running Hasura locally through Docker is really easy to set up and hasura console will give you access to a localhost console that syncs your changes with metadata/migration files in your local repo. From there, just commit, review, merge, and take advantage of Hasura Cloud’s GitHub deploy if you can. If not, hasura deploy and a few environment variables are really all you need to roll out changes.
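If the GitHub deploy integration isn't available, the CI rollout step can stay very small; something like this hedged sketch, where the environment variable names are assumptions:

```python
# CI rollout sketch: apply the committed Hasura migrations and metadata to a
# deployed environment with the Hasura CLI. Environment variable names are
# assumptions; the endpoint would point at your Hasura Cloud project.
import os
import subprocess

subprocess.run(
    [
        "hasura", "deploy",
        "--endpoint", os.environ["HASURA_ENDPOINT"],
        "--admin-secret", os.environ["HASURA_ADMIN_SECRET"],
    ],
    check=True,
)
```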
As far as preventing devs from tweaking things in the console, if you’re talking about some deployed shared environment then honestly I think access to the console should be limited.
So, I often work on projects where there are multiple dockerized Django application servers behind a load balancer, and I frequently need to deploy to them. I use Watchtower to do pull-based deploys: I build a new image, push it to Docker Hub, and Watchtower is responsible for pulling those images down to the servers and replacing the running containers. That all works pretty well.
I would like to start automating the running of Django migrations. One way I could accomplish this would be to simply add a manage.py migrate run to the entrypoint and have every container automatically attempt a migration when it comes online. This would work, and it would avoid the hassle of coming up with a lockout or leader-election mechanism; but without some way to prevent multiple runs, there is a risk that several instances of the migration could run at the same time. If I went this route, is there any chance that multiple migrations running at the same time could cause problems? Should I be looking for some other way to kick off these migrations once and only once?
Running migrations in parallel is not safe. I've tested it by running the migrate command in parallel on a data migration, and Django will run the migration twice; it will add 2 rows to the django_migrations table.
Check out this Google Groups post discussing the issue.
And this article about running migrations on container startup.
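If your database is PostgreSQL, a related option the posts above don't mention (so treat it as a separate suggestion, not their approach) is to serialize the migrate call with a session-level advisory lock, which needs no extra infrastructure. A sketch, meant to run inside a management command or an entrypoint script after Django is set up:

```python
# Alternative sketch (assumes PostgreSQL): serialize "migrate" across containers
# with a session-level advisory lock, so only one entrypoint actually applies
# the migrations at a time. The lock key 724707 is an arbitrary constant.
from django.core.management import call_command
from django.db import connection

MIGRATION_LOCK_KEY = 724707  # any agreed-upon integer

def migrate_once_across_containers() -> None:
    with connection.cursor() as cursor:
        # Blocks until this session holds the lock; other containers wait here.
        cursor.execute("SELECT pg_advisory_lock(%s)", [MIGRATION_LOCK_KEY])
        try:
            call_command("migrate", interactive=False)
        finally:
            cursor.execute("SELECT pg_advisory_unlock(%s)", [MIGRATION_LOCK_KEY])
```

Whichever container grabs the lock first runs the migrations; the others block, then find nothing left to apply and become no-ops, which keeps the simple entrypoint approach while removing the parallel-run risk.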
I am facing a challenge here. I inherited the models from the previous developers, and the tables were not properly built. I added some constraints and new tables in order to normalize those tables. Before pushing the application to Heroku, I tested it on my local machine, and it actually broke my database.
Now the Heroku site is already in production, so there is user information in it. How should I approach this? Do I need to destroy the existing database, create a new one, and run the migrations?
Be very, very careful. Applying migrations on production servers can cause irreversible damage, so you should be prepared for every possible situation.
My best recommendation would be to create an entire duplicate copy of your live DB (on Heroku this is as simple as a PG dump/backup). You can then create a new staging site using the same code, load the backup into a new database instance, and test against that; live environments are not always the same as local ones. Run your migrations on the staging site and see if there are any unexpected effects (the best way to do this would be by utilizing Django test cases). If there are any issues, be sure to understand how the rollback process works with Django migrations.
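For reference, rolling a Django app back to an earlier migration on that staging copy can look something like the sketch below; the app label and migration name are placeholders for your own migration history:

```python
# Sketch of rolling a single app back to an earlier migration on the staging
# copy before touching production. The app label "accounts" and the migration
# name "0004_previous_state" are placeholders.
import django
from django.core.management import call_command

django.setup()  # assumes DJANGO_SETTINGS_MODULE points at the staging settings

# Unapplies every "accounts" migration after 0004, newest first.
call_command("migrate", "accounts", "0004_previous_state", interactive=False)
```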
A good tutorial that is fairly recent can be found here: https://realpython.com/django-migrations-a-primer/