How "cf push" works? - cloud-foundry

Applying cf push to an existing running application stops and restarts the application instance with the new artifact.
This application has a route name assigned.
1) In order to assess the downtime for a banking app during cf push, what are the steps involved, from stopping the existing application instance to starting the new one?
2) Does Blue-Green deployment decrease the downtime?

The steps performed during a cf push are explained in the flowchart shown here. They include:
creation of the application
upload
staging
Blue-green deployment eliminates the downtime caused by pushing new application versions.
The basic workflow is described pretty well in the documentation. The basic idea is to deploy the new version side by side with the old one, assign the application's route to both of them, then remove the route from the old version. This way, there is no unavailability for the application route.
There is also the CF CLI plugin blue-green-deploy, which helps you automate this workflow so you do not have to take care of the individual steps.
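As a rough illustration, the manual blue-green flow with the cf CLI might look like the following sketch (the app names my-app-blue/my-app-green, the domain example.com, the hostname my-app and the path ./build are all placeholders):
cf push my-app-green -p ./build                            # deploy the new version under a new name
cf map-route my-app-green example.com --hostname my-app    # the route now points at both versions
cf unmap-route my-app-blue example.com --hostname my-app   # the old version stops receiving traffic
cf delete my-app-blue -f                                   # clean up the old version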

Related

GCP Build with Google Run always creates a new service, why?

Hi, I have a basic CI/CD setup.
Whenever I push new code to my GitHub repository (the trigger),
it should be pushed via Google Build to Google Run.
But currently, whenever the trigger is initiated, it creates a new service on my cloud account.
Now, I am not sure why it's happening!
What I want is to update the currently running service on Google Run
or
Create a new service, migrate 100% traffic to it and delete the old one.
This is what my deployment screen looks like. I have highlighted the section which I believe might be the reason for these duplicate services.
When you deploy a new version of your service (a new version can be a new image, or the same image with different parameters such as concurrency, min/max instances, CPU, env vars, ...), it is called a revision. There is effectively no limit on the number of revisions (in fact there is a limit of 1000; the oldest one is deleted when a new one is created).
The pricing model of Cloud Run (and other serverless products) is simple: you pay when your service is running. In your case, you haven't set a min-instances parameter, and therefore your revision will run (and you will start to pay) only when a request invokes your service (and even then you pay only when you go over the free tier).
Same thing for Cloud Build: you start to pay when you use it. And you use it every time you push your code to GitHub (which invokes a Cloud Build trigger and then deploys your code to Cloud Run). Here again, you have a comfortable free tier of 120 build-minutes per day for Cloud Build.
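For what it's worth, Cloud Run decides between "new revision" and "new service" based on the service name you deploy to. A minimal sketch with placeholder names (my-service, MY_PROJECT, my-image, us-central1):
gcloud run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image:latest \
  --region us-central1 \
  --platform managed
# Re-running this with the same service name creates a new revision of the
# existing service; deploying under a different name creates a separate service.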

How to manage automatic deployment to ECS using Terraform Cloud and CircleCI?

I have an ECS task which has 2 containers using 2 different images, both hosted in ECR. There are 2 GitHub repos for the two images (app and api), and a third repo for my IaC code (infra). I am managing my AWS infrastructure using Terraform Cloud. The ECS task definition is defined there using Cloudposse's ecs-alb-service-task, with the containers defined using ecs-container-definition. Presently I'm using latest as the image tag in the task definition defined in Terraform.
I am using CircleCI to build the Docker containers when I push changes to GitHub. I am tagging each image with latest and the variable ${CIRCLE_SHA1}. Both repos also update the task definition using the aws-ecs orb's deploy-service-update job, setting the tag used by each container image to the SHA1 (not latest). Example:
container-image-name-updates: "container=api,tag=${CIRCLE_SHA1}"
When I push code to the repo for e.g. api, a new version of the task definition is created, the service's version is updated, and the existing task is restarted using the new version. So far so good.
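As a sketch of the tagging step described above, with placeholder registry and image names ($ECR_REPO/api is made up; CIRCLE_SHA1 is provided by CircleCI):
docker build -t "$ECR_REPO/api:latest" -t "$ECR_REPO/api:$CIRCLE_SHA1" .   # same build, two tags
docker push "$ECR_REPO/api:latest"
docker push "$ECR_REPO/api:$CIRCLE_SHA1"   # the SHA1 tag is what the task definition update points at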
The problem is that when I update the infrastructure with Terraform, the service isn't behaving as I would expect. The ecs-alb-service-task has a boolean called ignore_changes_task_definition, which is true by default.
When I leave it as true, Terraform Cloud successfully creates a new version whenever I apply changes to the task definition. (A recent example was updating environment variables.) BUT it doesn't update the version used by the service, so the service carries on using the old version. Even if I stop a task, it will respawn using the old version. I have to manually go in and use the Update flow, or push changes to one of the code repos. Then CircleCI will create yet another version of the task definition and update the service.
If I instead set this to false, Terraform Cloud will undo the changes to the service performed by CircleCI. It will reset the task definition version to the last version it created itself!
So I have three questions:
How can I get Terraform to play nice with the task definitions created by CircleCI, while also updating the service itself if I ever change it via Terraform?
Is it a problem to be making changes to the task definition from THREE different places?
Is it a problem that the image tag is latest in Terraform (because I don't know what the SHA1 is)?
I'd really appreciate some guidance on how to properly set up this CI flow. I have found next to nothing online about how to use Terraform Cloud with CI products.
I have learned a bit more about this problem. It seems like the right solution is to use a CircleCI workflow to manage Terraform Cloud, instead of having the two services effectively competing with each other. By default Terraform Cloud will expect you to link a repo with it and it will auto-plan every time you push. But you can turn that off and use the terraform orb instead to run plan/apply via CircleCI.
You would still leave ignore_changes_task_definition set to true. Instead, you'd add another step to the workflow after the terraform/apply step has made the change. This would be aws-ecs/run-task, which should relaunch the service using the most recent task definition, which was (possibly) just created by the previous step. (See the task-definition parameter.)
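If you prefer the raw AWS CLI over the orb for that last step, the same idea, pointing the service at the newest task definition revision after terraform/apply, might look like this (the cluster, service and task family names are placeholders):
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-task-family   # a bare family name resolves to its latest ACTIVE revision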
I have decided that this isn't worth the effort for me, at least not at this time. The conflict between Terraform Cloud and CircleCI is annoying, but isn't that acute.

best practice for bitbucket pipeline deployment in AWS to live server

I am on a project which is about to release its first version. I want to set up a Bitbucket pipeline for deploying to AWS. When doing so, I am afraid that users on the website might be affected while we are deploying. What is the best practice for deploying a new feature to the live server without affecting users on the website?
One possible option might be to put a maintenance page on the web and deploy new code when not many users are using the website. Is there another way to deploy?
As mentioned in the comment, this depends on the underlying tools and technology, but I will focus on your last question.
One possible option might be to put a maintenance page on the web and deploy new code when not many users are using the website. Is there another way to deploy?
First, you should not deploy a new feature without proper testing; the pipeline must include automated testing, as such code can sometimes break the complete application.
You should not put the application under maintenance during deployment; that is why we have CI/CD pipelines. You should design your pipeline in such a way that you are confident the latest code and features will work in production as expected. Many AWS services support blue/green deployment, and an interesting part of blue/green deployment is rollback. You can explore further in the links below.
AWS_Blue_Green_Deployments
using-bitbucket-pipeline-for-aws-ecs-deployments
deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines
continuous-deployment-pipeline

Difference between cf push and cf restage

I have an app which is pushed to CF using cf push. Now I have changed one of the environment variables and then restaged the app using cf restage. My understanding is that a restage compiles the droplet and builds it again, and the same can be done by doing another cf push for the app. So what I want to know is: what is the difference between the two commands, and how does CF handle this internally?
The difference is that one uploads files and one does not.
When you run cf push, the CLI is going to take the bits off your local file system, upload them, stage your app and, if that succeeds, run your app. This is what you want if you have made changes to files in your app and want to deploy them.
When you cf restage, the CLI is not going to upload anything. It will just restage your app and, if that succeeds, run the app. This is what you want if there are no app changes or you do not have the app source code, yet you want to force the buildpacks to run again and restart your app using the new droplet.
When you cf restart, the CLI won't upload or restage; it will just stop and start the app. This is the fastest option, but it only works if you just need to pick up environment changes like a changed service, memory limit or environment variables. It's also good if you just want to try having your app placed onto a different Diego Cell.
If you are just making changes to environment variables, you can probably get away with a cf restart, unless those environment variables are being used by one of your buildpacks, like JBP_CONFIG_*.
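As a concrete sketch (the app name and variable values below are made up): a plain environment variable change only needs a restart, while a variable consumed by a buildpack at staging time needs a restage:
cf set-env my-app LOG_LEVEL debug
cf restart my-app       # picks up the new value without re-running the buildpacks
cf set-env my-app JBP_CONFIG_OPEN_JDK_JRE '{ jre: { version: 11.+ } }'
cf restage my-app       # the Java buildpack reads this during staging, so the droplet must be rebuilt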
Hope that helps!
Regarding
So what I want to know is the difference between the two commands
cf push internally performs the following sequence:
uploading: creating a package from your app
staging: creating a container image using the package
starting: starting the container as a long-running process, based on the container image
Some use cases where we need to restage:
To pick up buildpack changes: CF operators update the buildpacks for various reasons (one example: security patches), effectively updating the runtime dependencies our apps use.
Introduction of new software components: we may need to introduce new or additional components that were not foreseen at the time of the initial push. An example is the introduction of a monitoring agent for your app.
When the root file system is changed.
In all three scenarios above, our application code is not changing, but we need to re-create the container image; in these cases we need to restage. Compared with cf push, a restage skips the package creation step.
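A short sketch of the contrast (the app name and path are placeholders):
cf push my-app -p ./build   # upload (new package) -> staging (new droplet) -> start
cf restage my-app           # staging (new droplet from the existing package) -> start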

Revert failed cloud foundry deploy

I'm automating app deployment to Cloud Foundry. In the start command, I do a DB migration. What can happen is that the migration fails and, as a result, the app is dead. Is there some predefined strategy that can be used to roll back to the last working deployment, or should I manually store the last working version, check for failure and, in that case, redeploy the stored version?
The typical strategy used to deploy apps on Cloud Foundry is blue/green. This generally works like this:
Push the new app under a new name & host, like my-app-new.
Test the app & make sure it works.
When you're satisfied, change the route mapping from the old app to the new app.
Delete the old app & optionally rename the new app.
Step #3 is where the cut-over happens. Prior to that all traffic keeps flowing to the old app.
This is documented more here.
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
I'd say this often works well, but sometimes there are problems. Where this breaks down is with steps #1 & #2, if your app cannot have multiple instances of itself running or if migrations to your service are so different that the old app breaks when you update the database. It definitely helps if you keep this strategy in mind as you develop your app.
Aside from that, which has historically been the way to go, you could take a look at the new v3 API functionality. With v3, apps now retain multiple versions of a droplet. With this, you can roll back to a previous version of a droplet.
http://v3-apidocs.cloudfoundry.org/version/3.36.0/index.html#droplets
You can run cf v3-droplets to see the available droplets and cf v3-set-droplet to change the droplet being used.
That said, this will only roll back the droplet. It would not roll back a service like a database schema. If you need to do that, you'd need reverse migrations or perhaps even a restore from a backup.
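A hedged sketch of the droplet rollback (the app name and GUID are placeholders; these v3 commands were experimental, so flags can differ between cf CLI versions; check cf v3-set-droplet --help for yours):
cf v3-droplets my-app                        # list the droplets the platform still has for the app
cf v3-set-droplet my-app -d <droplet-guid>   # point the app at an earlier droplet (flag name may vary)
cf restart my-app                            # restart so the app runs on that droplet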
Hope that helps!
I work on very similar automation processes.
Daniel has explained the process very well. I think you're looking for the blue-green deployment methodology.
1) Read up on blue green deploy here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
2) Look at this plugin or implement blue green deploy manually:
https://github.com/contraband/autopilot
3) Blue-green restage plugin (a nice to have, in case you need to restage the app but not cause any downtime to the clients):
https://github.com/orange-cloudfoundry/cf-plugin-bg-restage
It works by creating a temporary app and copying the env vars/routes/code from the working app to the temp app.
The temp app accepts traffic while the original app is being restaged.
The traffic then moves back to the original app after it is restaged, and the temporary app is deleted.