Revert failed Cloud Foundry deploy - database migration

I'm automating app deployment to Cloud Foundry, and the start command runs a database migration. The migration can fail, and when it does the app is left dead. Is there a predefined strategy for rolling back to the last working deployment, or should I manually store the last working version, check for failure, and redeploy the stored version in that case?

The typical strategy used to deploy apps on Cloud Foundry is blue/green. This generally works like this:
1) Push the new app under a new name & host, like my-app-new.
2) Test the app & make sure it works.
3) When you're satisfied, change the route mapping from the old app to the new app.
4) Delete the old app & optionally rename the new app.
Step #3 is where the cut-over happens. Prior to that, all traffic keeps flowing to the old app.
This is documented in more detail here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
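A minimal sketch of those steps with the cf CLI (the app names and the example.com domain are hypothetical):

cf push my-app-new -n my-app-new                 # 1) push under a temporary name & host
                                                 # 2) test my-app-new.example.com
cf map-route my-app-new example.com -n my-app    # 3) route production traffic to the new app...
cf unmap-route my-app example.com -n my-app      #    ...and stop routing it to the old one
cf delete my-app -f                              # 4) delete the old app
cf rename my-app-new my-app                      #    optionally take over the old name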
I'd say this often works well, but sometimes there are problems. Where it breaks down is with steps #1 & #2: if your app cannot have multiple instances of itself running, or if your database migrations are so incompatible that the old app breaks once you update the database. It definitely helps if you keep this strategy in mind as you develop your app.
Aside from that, which has historically been the way to go, you could take a look at the new v3 API functionality. With v3, apps now retain multiple versions of a droplet. With this, you can roll back to a previous version of a droplet.
http://v3-apidocs.cloudfoundry.org/version/3.36.0/index.html#droplets
You can run cf v3-droplets to see the available droplets and cf v3-set-droplet to change the droplet being used.
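For example (the app name is hypothetical, and the exact flag names vary between cf CLI versions, so check cf help for yours):

cf v3-droplets my-app                        # list the app's droplets and their GUIDs
cf v3-set-droplet my-app -d <droplet-guid>   # point the app at an earlier droplet
cf v3-restart my-app                         # restart so the app runs the selected droplet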
That said, this only rolls back the droplet. It would not roll back a service like a database schema. If you need to do that, you'd need reverse migrations or perhaps even a restore from a backup.
Hope that helps!

I work on very similar automation processes.
Daniel has explained the process very well. I think you're looking for the blue-green deployment methodology.
1) Read up on blue green deploy here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
2) Look at this plugin or implement blue green deploy manually:
https://github.com/contraband/autopilot
3) Blue-green restage plugin (nice to have, in case you need to restage the app without causing any downtime for clients):
https://github.com/orange-cloudfoundry/cf-plugin-bg-restage
It works by creating a temporary app and copying the env vars/routes/code from the working app to the temp app.
The temp app accepts traffic while the original app is being restaged.
Traffic then moves back to the original app once it has been restaged, and the temporary app is deleted.
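As a rough sketch of how the autopilot plugin is used once installed (the app name and paths are hypothetical; see its README for installation details):

cf install-plugin path/to/autopilot                       # install the downloaded plugin binary
cf zero-downtime-push my-app -f manifest.yml -p ./build   # blue-green push handled by the plugin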

Related

Best way of deployment automation

I've made a pretty simple web application with a REST API backend service written in Python/Django and an FE service written in JS/React. Both parts are containerized and can be launched locally via docker-compose. They live in separate GitHub repositories, and every time a new tag is pushed to the corresponding repo, an image is built and pushed to the corresponding ECR repo via GitHub Actions. Up to that point everything works smoothly, but the problem is that I don't know how to properly automate the deployment process to the test and production environments. The goal is to have those environments as similar as possible.
My current solution for the test env is to simply upload the docker-compose file to the EC2 instance via GitHub Actions, then manually run the docker-compose command, which pulls images from ECR.
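For reference, the manual part on the instance boils down to something like this (the region and registry URL are placeholders):

aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker-compose pull    # fetch the newly tagged images from ECR
docker-compose up -d   # recreate only the containers whose image changed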
Even though it's a simple solution, it's not very scalable or automated, and it requires some work to update the application. The desired flow is to have a manually triggered GitHub Action in each repository, which would deploy either the BE or the FE to the test or prod environment without any need to ssh into the server and do any other manipulations with docker.
I was looking at ECS, but it seems to be a solution for bigger apps that need several or more instances to run. I want my app to be used by many users, but I'm not sure when that will happen, so maybe I should stick to something less complicated than ECS. Are there any simpler solutions that I am missing, like Elastic Beanstalk or something from another provider?
I will be happy to receive feedback on anything I wrote in this post, thanks!

Set different environments in AWS Amplify

I am just getting started with AWS Amplify and after some research, I am still unable to set up the environment structure I want. I have a Reactjs app which I want to host there, and my plan is to have 3 environments:
Dev: this environment is to test new features. Every new branch I create is automatically deployed to this environment (no problem here, already implemented).
Staging: Once new features are merged into the master branch, I would like to have them deployed here. This should work as a pre-production environment.
Production: Once features in staging are tested, they should be released into Production with just 1 click (or an easy action). Also production should be always running with the latest released build of the project.
So, what's the problem exactly? So far I don't know how to have master point to 2 environments; it is deployed to either the staging or the production environment, and promoting from staging to production is rather tedious at the moment.
Is there any way to implement this workflow in Amplify? Thank you in advance for your help.

Create a new GCP project from existing

I created a Project on GCP. It has a postgres database, a node Appengine web app, and some other stuff. Now I am developing the app, and when everything is set up and running nicely I'd like to clone this project somehow and create a staging and a production environment/project.
So my project is now called dev-awesomeapp. Can I somehow make a staging-awesomeapp for staging and an awesomeapp for production from my existing dev-awesomeapp?
Edit: there is another question from 2017 that asks the same thing, but maybe it's possible now, 2.5 years later?
You can't, but if you don't want to configure everything from the beginning each time, you can use "architecture as code" with tools like Deployment Manager or Terraform.
This could help you replicate your infrastructure. Moreover, it can be really helpful for automating any architectural changes if you use it in a CI/CD pipeline, making your release phase quicker and more reliable :)
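For example, with Terraform you could keep a single configuration and apply it once per project (the project_id variable name is hypothetical, and in practice you'd also keep separate state per environment, e.g. via workspaces):

terraform init
terraform apply -var="project_id=staging-awesomeapp"   # replicate the setup into the staging project
terraform apply -var="project_id=awesomeapp"           # ...and into the production project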

How "cf push" works?

Applying cf push to an existing running application stops the application instance and starts a new one with the new artifact.
This application has a route assigned.
1) In order to assess the downtime for a banking app during cf push, what are the steps involved, from stopping the existing application instance to starting the new application instance?
2) Does Blue-Green deployment decrease the downtime?
The steps done in a cf push are explained in the flowchart shown here. They include:
creation of application
upload
staging
Blue-green deployment eliminates the downtime caused by pushes of new application versions.
The basic workflow is described pretty well in the documentation. The basic idea is to deploy the new version side by side with the old one, assign the application's route to both of them, then remove the route from the old version. This way, the application route is never unavailable.
There is at least the CF CLI plugin blue-green-deploy, which will help you automate this workflow so you do not have to take care of the individual steps.
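A sketch of how that plugin is typically used (the app name is hypothetical; check the plugin's docs for current install instructions):

cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org
cf install-plugin blue-green-deploy -r CF-Community
cf blue-green-deploy my-app    # pushes a new copy, remaps the route, then retires the old copy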

Automate and streamline Django deployment from local to server

Recently, I have started to deploy my work-in-progress Django site from my local environment to the server. But I have been doing it manually, which is ugly, unorganized, and error-prone.
I am looking for a way to automate and streamline the following deployment tasks:
Make sure all changes are committed and pushed to the remote source repository (Mercurial) and tag the release.
Deploy the release to the server (including any required 3rd-party apps missing from the server)
Apply the model changes to the database on the server
For 2), I have two further questions. Should the source of the deployment be my local env or the source repository? Do I need a differential or full deployment?
For 3), I use South locally to apply model changes to the database. Do I do the same on the server? If so, how do I apply multiple migrations at once?
I think Fabric is the de facto lightweight Python deployment tool. http://docs.fabfile.org/en/1.3.4/index.html. It is very simple and will help you keep your deployment organized and streamlined. It allows for easy scp or rsync. Additionally, it is easy to integrate with Django tests.
For my smaller projects I just make my local env the source of my deployments. I check out a clean copy and deploy from there. It would probably be better to integrate this with my version control for a quick rollback if there are any errors once I deploy.
I have never used South, but I'd imagine you could just write a fab command to run your migrations on the production server. If you're using South in dev, I can't imagine why you wouldn't want to use it in production too.
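A sketch of the overall sequence (the hostname and paths are hypothetical), whether you run it by hand or wrap each step in a Fabric task:

hg tag release-1.2 && hg push                                        # 1) tag the release and push to the remote repo
ssh deploy@myserver 'cd /srv/mysite && hg pull -u'                   # 2) update the server from the repository
ssh deploy@myserver 'pip install -r /srv/mysite/requirements.txt'    #    ...including any missing 3rd-party apps
ssh deploy@myserver 'cd /srv/mysite && python manage.py migrate'     # 3) South applies all pending migrations

That last command also answers the multiple-migrations question: South's migrate command with no arguments runs every pending migration across all apps in one go.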