I am just getting started with AWS Amplify, and after some research I am still unable to set up the environment structure I want. I have a React app that I want to host there, and my plan is to have three environments:
Dev: this environment is for testing new features. Every new branch I create is automatically deployed here (no problem with this one; it is already implemented).
Staging: Once new features are merged into the master branch, I would like to have them deployed here. This should act as a pre-production environment.
Production: Once features in staging are tested, they should be released to production with a single click (or an equally easy action). Production should also always run the latest released build of the project.
So what's the problem, exactly? So far I don't know how to point master at two environments, so that a given build of master is deployed to either staging or production, and promoting from staging to production is rather tedious at the moment.
Is there any way to implement this workflow in Amplify? Thank you in advance for your help.
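One setup that seems to match this workflow (a sketch, not an official Amplify feature; the branch name "production" is an assumption): keep master connected to the staging environment, connect a separate long-lived production branch to the production environment, and make "promotion" a single fast-forward push that Amplify's auto-build picks up:

    # master stays connected to the staging environment; a long-lived
    # "production" branch (assumed name) is connected to the production environment
    git fetch origin
    git push origin origin/master:production   # fast-forward production to the tested master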
Related
I created a project on GCP. It has a Postgres database, a Node.js App Engine web app, and some other stuff. Now I am developing the app, and when everything is set up and running nicely, I'd like to somehow clone this project to create a staging and a production environment/project.
My project is currently called dev-awesomeapp. Can I somehow make a staging-awesomeapp for staging and an awesomeapp for production from my existing dev-awesomeapp?
Edit: another question from 2017 asks the same thing, but maybe it's possible now, 2.5 years later?
You can't, but if you don't want to configure everything from scratch each time, you can use "infrastructure as code" with tools like Deployment Manager or Terraform.
This helps you replicate your infrastructure; moreover, it can be really helpful for automating architectural changes if you use it in a CI/CD pipeline, making your release phase quicker and more reliable :)
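As a rough illustration (not from the answer above; the project_id variable is an assumption about your Terraform configuration), Terraform workspaces let one set of templates stamp out each environment:

    # one set of Terraform templates, one workspace (i.e. one state file) per environment
    terraform workspace new staging
    terraform apply -var="project_id=staging-awesomeapp"

    terraform workspace new production
    terraform apply -var="project_id=awesomeapp"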
I'm automating app deployment to Cloud Foundry, and in the start command I run a database migration. The migration can fail, and as a result the app will be dead. Is there a predefined strategy for rolling back to the last working deployment, or should I manually store the last working version, check for failure, and redeploy the stored version in that case?
The typical strategy used to deploy apps on Cloud Foundry is blue/green. This generally works like this:
Push the new app under a new name & host, like my-app-new.
Test the app & make sure it works.
When you're satisfied, change the route mapping from the old app to the new app.
Delete the old app & optionally rename the new app.
Step #3 is where the cut-over happens. Prior to that, all traffic keeps flowing to the old app.
This is documented more here.
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
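A minimal sketch of those four steps with the cf CLI (the app names, example.com domain, and hostnames are placeholders):

    # 1. push the new version under a temporary name and hostname
    cf push my-app-new -n my-app-new

    # 2. test it at my-app-new.<your-domain> and make sure it works

    # 3. cut over: map the production route to the new app, then unmap the old one
    cf map-route my-app-new example.com -n my-app
    cf unmap-route my-app-old example.com -n my-app

    # 4. clean up
    cf delete my-app-old -f
    cf rename my-app-new my-app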
I'd say this often works well, but sometimes there are problems. Where it breaks down is at steps #1 and #2: if your app cannot have multiple instances of itself running, or if migrations to your service change it so much that the old app breaks when you update the database. It definitely helps if you keep this strategy in mind as you develop your app.
Aside from that, which has historically been the way to go, you could take a look at the new v3 API functionality. With v3, apps now retain multiple versions of a droplet. With this, you can roll back to a previous version of a droplet.
http://v3-apidocs.cloudfoundry.org/version/3.36.0/index.html#droplets
You can run cf v3-droplets to see the available droplets and cf v3-set-droplet to change the droplet being used.
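For example (the GUID is a placeholder, and the -d flag syntax is from the v6-era experimental commands; check cf v3-set-droplet -h for your CLI version):

    cf v3-droplets my-app                       # list droplets and their GUIDs
    cf v3-set-droplet my-app -d <droplet-guid>  # point the app at an earlier droplet
    cf v3-restart my-app                        # restart so the app runs the old droplet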
That said, this only rolls back the droplet. It will not roll back a service like a database schema. If you need to do that, you'd need reverse migrations or perhaps even a restore from a backup.
Hope that helps!
I work on very similar automation processes.
Daniel has explained the process very well. I think you're looking for the blue-green deployment methodology.
1) Read up on blue green deploy here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
2) Look at this plugin or implement blue green deploy manually:
https://github.com/contraband/autopilot
3) Blue-green restage plugin (a nice to have, in case you need to restage the app but not cause any downtime to the clients):
https://github.com/orange-cloudfoundry/cf-plugin-bg-restage
It works by creating a temporary app and copying the env vars, routes, and code from the working app to the temp app.
The temp app then accepts traffic while the original app is being restaged.
Traffic moves back to the original app after it is restaged, and the temporary app is deleted.
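Typical usage of the autopilot plugin looks like this (assuming it is available in the CF-Community plugin repo and that you have a manifest.yml):

    cf install-plugin -r CF-Community autopilot   # plugin repo name assumed

    # pushes the new code, renames the running app to my-app-venerable,
    # and deletes the old copy once the new app is up
    cf zero-downtime-push my-app -f manifest.yml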
We are a relatively inexperienced development team trying to do things 'the right way'. We are using GitHub along with AWS and CodeDeploy for multiple PHP-based web applications. We are utilising GitHub's auto-deployment with CodeDeploy when the master branch is updated.
We have two production EC2 web servers in separate AZs, along with a single EC2 staging server.
It currently works as follows:
We write code in a branch, push to GitHub, and merge into 'master', which kicks off CodeDeploy to deploy to our staging server, where we can test it. Once we have tested it, we manually kick off CodeDeploy to deploy to production (with the same commit ID).
The problem is that if testing brings up issues and we have another branch waiting to be merged and tested, everything gets backed up.
We are obviously doing something wrong. We are writing to the master branch to utilise GitHub's auto-deploy, but I assumed master should only be written to when the code is ready to be deployed?
Can someone please help us and put us straight?
Thanks
Make another branch called 'livecandidate'. Each new feature branch will be merged into it.
Each time a feature branch is merged into 'livecandidate', pull 'livecandidate' into your CodeDeploy process and install it on the test machine.
If the tests pass, merge 'livecandidate' into 'master' and kick off the install to production.
If the tests do not pass, unwind the merge into 'livecandidate' (assuming no dependencies on chains of changes, etc.).
After doing a production install or an un-merge, try the next feature.
The general idea is to never, ever have a broken master. A sketch of the git flow is below.
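Something like this, assuming your CodeDeploy hooks fire on pushes to 'livecandidate' (test machine) and 'master' (production); the feature branch name is an example:

    # merge a feature into the integration branch; CodeDeploy installs it on the test machine
    git checkout livecandidate
    git merge --no-ff feature/foo
    git push origin livecandidate

    # tests passed: promote to master, which kicks off the production install
    git checkout master
    git merge --ff-only livecandidate
    git push origin master

    # tests failed: unwind the merge and try the next feature
    git checkout livecandidate
    git revert -m 1 HEAD                   # revert the merge commit
    git push origin livecandidate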
All problems in computer science can be solved by another level of indirection - David Wheeler
I am searching for a solution to do continuous deployment in a cloud environment, more specifically in an Amazon AWS environment.
The code to be deployed is mainly Microsoft ASP and PHP, so the framework should work on both platforms. Since I have an auto-scaling environment, the framework must pull new code onto the servers, the way Puppet does.
My first thought was to deploy directly from the VCS, but I ran into the problem that the entire repository history gets mirrored to the servers, which is how Git, for instance, works. This is a problem because the repository keeps growing, and the servers will demand more and more space.
I found Ansible, which works the way I need: it sends only the production code to the servers, not the VCS repository, and keeps track of which servers are up to date. However, it does not work in a Windows environment.
Without an easy-to-set-up framework like that, I will need to build a Puppet + Jenkins + VCS pipeline, where Jenkins creates a package from the VCS source code and Puppet delivers it.
Does anybody know a small framework for my needs, or is Puppet + Jenkins + VCS the way to go?
Consider CloudMunch (www.cloudmunch.com) for this. The platform is built exactly to solve this kind of polyglot requirement.
Disclaimer: I work for CloudMunch
Recently, I have started to deploy my work-in-progress Django site from my local environment to the server. But I have been doing it manually, which is ugly, unorganized, and error-prone.
I am looking for a way to automate and streamline the following deployment tasks:
Make sure all changes are committed and pushed to the remote source repository (Mercurial), and tag the release.
Deploy the release to the server (including any required 3rd-party apps missing from the server).
Apply the model changes to the database on the server.
For 2), I have two further questions. Should the source of the deployment be my local environment or the source repository? Do I need a differential or a full deployment?
For 3), I use South locally to apply model changes to the database. Should I do the same on the server? If so, how do I apply multiple migrations at once?
I think Fabric is the de facto lightweight Python deployment tool: http://docs.fabfile.org/en/1.3.4/index.html. It is very simple and will help you keep your deployment organized and streamlined. It allows for easy scp or rsync, and it is easy to integrate with Django tests.
For my smaller projects I just make my local environment the source of my deployments: I check out a clean copy and deploy from there. It would probably be better to integrate this with version control for a quick rollback if there are any errors once I deploy.
I have never used South, but I'd imagine you could just write a fab command to sync your production server. If you're using South in dev, I can't imagine why you wouldn't want to use it in production too.
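As a rough sketch of steps 1-3 as shell commands that a fab task could wrap (the version tag, requirements.txt, and server paths are assumptions about your project):

    # 1. commit, tag, and push the release (Mercurial)
    hg commit -m "Release 1.2"
    hg tag 1.2
    hg push

    # 2. on the server: update to the tagged release and install any missing 3rd-party apps
    hg pull && hg update 1.2
    pip install -r requirements.txt

    # 3. South applies every outstanding migration, in order, with one command
    python manage.py migrate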