My architecture currently consists of a web app built with Django and Webpack, deployed on Cloud Run. The build process is just a Dockerfile.
When someone on my team opens a new PR, we would like deployment to a new instance to commence automatically. This is explained here (deployment previews). However, I would also like an ephemeral database to be spawned, similarly to what happens in Heroku. This is necessary because otherwise conflicting Django migrations may damage the staging database (e.g., a teammate and I each pushing conflicting migrations).
I am just getting started with AWS Amplify and, after some research, I am still unable to set up the environment structure I want. I have a React.js app that I want to host there, and my plan is to have 3 environments:
Dev: this environment is to test new features. Every new branch I create is automatically deployed to this environment (no problem here, already implemented).
Staging: Once new features are merged into the master branch, I would like to have them deployed here. This should work as a pre-production environment.
Production: Once features in staging are tested, they should be released into production with just 1 click (or an easy action). Also, production should always be running the latest released build of the project.
So, what's the problem exactly? So far I don't know how to have master point to two environments; at the moment a build of master goes to either the staging or the production environment, and promoting from staging to production is rather tedious.
Is there any way to implement this workflow in Amplify? Thank you in advance for your help.
I'm automating app deployment to Cloud Foundry. In the start command, I run a DB migration. What can happen is that the migration fails and, as a result, the app is dead. Is there some predefined strategy that can be used to roll back to the last working deployment, or should I manually store the last working version, check for failure, and in that case redeploy the stored version?
The typical strategy used to deploy apps on Cloud Foundry is blue/green. This generally works like this:
Push the new app under a new name & host, like my-app-new.
Test the app & make sure it works.
When you're satisfied, change the route mapping from the old app to the new app.
Delete the old app & optionally rename the new app.
Step #3 is where the cut-over happens. Prior to that all traffic keeps flowing to the old app.
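A rough sketch of those steps with the cf CLI (the app names, domain, and hostname are illustrative, not anything the platform prescribes):

# 1. push the new version under a temporary name and host
cf push my-app-new -n my-app-new
# 2. smoke-test it at my-app-new.example.com, then...
# 3. cut over: map the production route to the new app, unmap it from the old
cf map-route my-app-new example.com -n my-app
cf unmap-route my-app example.com -n my-app
# 4. delete the old app and (optionally) rename the new one
cf delete my-app -f
cf rename my-app-new my-app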
This is documented in more detail here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
I'd say this often works well, but sometimes there are problems. Where this breaks down is with steps #1 & #2, if your app cannot have multiple instances of itself running or if migrations to your service are so different that the old app breaks when you update the database. It definitely helps if you keep this strategy in mind as you develop your app.
Aside from that, which has historically been the way to go, you could take a look at the new v3 API functionality. With v3, apps now retain multiple versions of a droplet. With this, you can rollback to a previous version of a droplet.
http://v3-apidocs.cloudfoundry.org/version/3.36.0/index.html#droplets
You can run cf v3-droplets to see the available droplets and cf v3-set-droplet to change the droplet being used.
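A rollback along those lines might look like this (the GUID is a placeholder, and these v3 commands were experimental, so flag spellings vary between CLI versions; check cf v3-set-droplet --help):

# list the droplets (and their GUIDs) that the app has retained
cf v3-droplets my-app
# point the app at a previous, known-good droplet, then restart it
cf v3-set-droplet my-app -d <droplet-guid>
cf v3-restart my-app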
That said, this will only roll back the droplet. It would not roll back a service like a database schema. If you need to do that, you'd need reverse migrations, or perhaps even to restore from a backup.
Hope that helps!
I work on very similar automation processes.
Daniel has explained the process very well. I think you're looking for the blue-green deployment methodology.
1) Read up on blue green deploy here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
2) Look at this plugin (usage sketch after this list) or implement blue-green deploy manually:
https://github.com/contraband/autopilot
3) Blue-green restage plugin (a nice to have, in case you need to restage the app but not cause any downtime to the clients):
https://github.com/orange-cloudfoundry/cf-plugin-bg-restage
It works by creating a temporary app and copying the env vars, routes, and code from the working app to the temp app. The temp app accepts traffic while the original app is being restaged. Traffic then moves back to the original app after it is restaged, and the temporary app is deleted.
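To make 2) concrete, autopilot wraps the whole blue-green cycle into a single command. A minimal sketch, assuming the plugin binary has been downloaded from the GitHub releases page above (app name, manifest, and path are illustrative):

cf install-plugin path/to/autopilot-binary
# pushes a fresh copy under a temporary name, remaps the route to it,
# and deletes the old app once the new one is running
cf zero-downtime-push my-app -f manifest.yml -p path/to/app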
I have an Amazon EC2 instance that I'd like to use as a development server for client projects as well as to run JIRA. I have a domain pointed to the EC2 server's IP. I'm new to Docker, so I'm unsure whether my approach is correct.
I'd like to have a JIRA container installed (with another jiradb MySQL container) running at jira.domain.com, as well as the potential to host client staging websites at client.domain.com, which point to the clients' Docker containers.
I've been trying to use this JIRA Docker image using the provided command
docker run --detach --publish 8080:8080 cptactionhank/atlassian-jira:latest
but the container always stops running mid-setup (setup takes a while between steps). When I run the container again, it goes back to the start of setup.
Once I have JIRA set up how would I run it under a subdomain? And how could I then have client.domain.com point to a separate docker container?
Thanks in advance!
As you probably know, there are two considerations for getting Jira set up, whether as a server or a container:
You need to enter a license key early in the setup process (and it requires an Internet connection for verification), even if it's an evaluation
By default Jira will use its built-in (H2, IIRC) database, unless you configure an external one
So, in the case of 2) you probably want to make sure you have your external database ready and set up.
See Connecting Jira applications to external databases for preparatory steps for a variety of databases.
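For the jiradb MySQL container mentioned in the question, the preparation could look something like this sketch (names and passwords are placeholders; the Atlassian page above lists further MySQL requirements such as character set and isolation level):

# a MySQL container to serve as Jira's external database
docker run --detach --name jiradb \
    --env MYSQL_ROOT_PASSWORD=changeme --env MYSQL_DATABASE=jiradb \
    --env MYSQL_USER=jira --env MYSQL_PASSWORD=jirapass \
    mysql:5.7
# link it when starting Jira so the setup wizard can reach it at host "jiradb"
docker run --detach --publish 8080:8080 --link jiradb:jiradb \
    cptactionhank/atlassian-jira:latest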
You didn't mention at what stage your first setup run fails. However, once you've gotten past step 1) (or any further successful setup step), one of the first things I did, so as not to lose all the work I'd done, was to commit the container!
docker commit -a 'My Name' -m 'Jira configured and set up' <container ID> myrepo/myjira:mytag
That way you don't lose all your previous work and you save your container into a new image in one fell swoop.
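You can then resume from the committed image instead of restarting setup. As for the subdomain part of your question, one common pattern (an assumption on my part, not something this image requires) is to put a reverse-proxy container such as jwilder/nginx-proxy in front and route by hostname:

# reverse proxy that watches the Docker socket and routes by VIRTUAL_HOST
docker run --detach --publish 80:80 \
    --volume /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# resume Jira from the committed image, reachable at jira.domain.com
docker run --detach --env VIRTUAL_HOST=jira.domain.com myrepo/myjira:mytag
# the same pattern serves a client staging site at client.domain.com
docker run --detach --env VIRTUAL_HOST=client.domain.com some-client-image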
Amazon Web Services offers a number of continuous deployment and management tools, such as Elastic Beanstalk, OpsWorks, CloudFormation and CodeDeploy, depending on your needs. The basic idea is to facilitate code deployment and upgrades with zero downtime. They also help you follow architectural best practice using AWS resources.
For simplicity, let's assume a basic architecture where you have a two-tier structure: a collection of application servers behind a load balancer, and then a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so that the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade, there is a point where both 1.0.0 and 2.0.0 applications exist across the fleet, each requiring a different DB structure.
I need to understand how this is actually achieved (at a high level) to know the best way and time to perform the DB migration. I guess there are a couple of ways they could be achieving this, but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
Perhaps they migrate the DB structure with the first app-node upgrade and at the same time create a cached copy of the 1.0.0 database. Users connected to the 1.0.0 app persist to the cached copy, and users connected to the 2.0.0 app persist to the newly migrated DB. Once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this, as the merge would be pretty complex, but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure grows to multiple application nodes. In the olden days, you could take your application offline for "maintenance windows", during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years now, this approach has not been acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is: the answer is a series of patterns for Database Refactorings.
The basic concept to always keep in mind is to assume you have to maintain two concurrent versions of your application, and there can be no breaking changes between these two versions. This means that you have the current application (v1.0.0) in production and a new version (v2.0.0) scheduled to be deployed. Both of these versions must work on the same schema. Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0, which allows you to complete any final database changes.
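As a hedged illustration of a change both versions can live with, here is what the "expand" step of such a migration might look like in Django (the app, model, and field names are hypothetical; a Flyway SQL script would follow the same shape):

# migrations/0002_expand_display_name.py
from django.db import migrations, models

def backfill(apps, schema_editor):
    User = apps.get_model("accounts", "User")
    # copy data into the new column; v1.0.0 nodes keep reading/writing full_name
    User.objects.filter(display_name__isnull=True).update(
        display_name=models.F("full_name"))

class Migration(migrations.Migration):
    dependencies = [("accounts", "0001_initial")]
    operations = [
        # additive and nullable, so v1.0.0 never notices it
        migrations.AddField("user", "display_name",
                            models.CharField(max_length=255, null=True)),
        migrations.RunPython(backfill, migrations.RunPython.noop),
    ]

The matching "contract" step, which drops full_name, ships only with v3.0.0, once no v1.0.0 node remains.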
Recently, I have started to deploy my work-in-progress Django site from my local machine to the server. But I have been doing it manually, which is ugly, unorganized, and error-prone.
I am looking for a way to automate and streamline the following deployment tasks:
Make sure all changes are committed and pushed to the remote source repository (Mercurial), and tag the release.
Deploy the release to the server (including any required 3rd-party apps missing from the server)
Apply the model changes to the database on the server
For 2), I have two further questions. Should the source of the deployment be my local env or the source repository? Do I need a differential or full deployment?
For 3), I use South locally to apply model changes to the database. Do I do the same on the server? If so, how do I apply multiple migrations at once?
I think Fabric is the de facto lightweight Python deployment tool: http://docs.fabfile.org/en/1.3.4/index.html. It is very simple and will help you keep your deployment organized and streamlined. It allows for easy scp or rsync. Additionally, it is easy to integrate with Django tests.
For my smaller projects I just make the source of my deployments my local env. I check out a clean copy and deploy from there. It would probably be better to integrate this with my version control for a quick rollback if there are any errors once I deploy.
I have never used South, but I'd imagine you could just write a fab command to sync your production server. If you're using South in dev, I can't imagine why you wouldn't want to use it in production too.
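To make that concrete, here is a minimal fabfile sketch against the Fabric 1.x API linked above (the host, paths, and tag handling are assumptions, not your actual setup). Note that running South's migrate with no app label applies every pending migration in one go, which answers question 3):

# fabfile.py
from fabric.api import cd, env, local, run

env.hosts = ["deploy@myserver.example.com"]

def deploy(tag):
    # 1) tag the release locally and push it to the remote repository
    local("hg tag %s && hg push" % tag)
    with cd("/srv/mysite"):
        # 2) update the server checkout to the tagged release and
        #    install any missing 3rd-party apps
        run("hg pull && hg update %s" % tag)
        run("pip install -r requirements.txt")
        # 3) apply ALL pending South migrations at once (no app label needed)
        run("python manage.py migrate")
        # reload the WSGI process (mod_wsgi-style; adjust for your server)
        run("touch apache/django.wsgi")

This is invoked from a local checkout as, e.g., fab deploy:1.2.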