Automate Apigee API Proxy Deployment in CI/CD Pipeline - google-cloud-platform

Is there any recommended method to create and deploy the Apigee API Proxy Bundle via a CI/CD pipeline (I'm using Azure DevOps)?
I want to avoid excessive API Proxy Bundles being created and deployed when nothing has changed. I've already tested this and found that uploading an identical bundle still creates a new revision.
So far, my own solution is a PowerShell script that uses apigeecli to download the currently deployed bundle and compare it against the apiproxy I have locally in my repo. If they differ, I create and deploy a new API Proxy Bundle.
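Roughly, the comparison step looks like this (shown as bash rather than PowerShell for brevity; the apigeecli subcommands and flags here are written from memory and may need adjusting for your apigeecli version, and the org/env/proxy names are placeholders):

```bash
# Sketch: only create and deploy a new revision when the local apiproxy
# differs from what is currently in Apigee.
# NOTE: the apigeecli flag names below are assumptions; check your version's help.
set -euo pipefail

ORG="my-org"        # placeholder
ENV="test"          # placeholder
PROXY="my-proxy"    # placeholder
TOKEN="$(gcloud auth print-access-token)"

# Download the current bundle from Apigee and unzip it for comparison.
apigeecli apis fetch --name "$PROXY" --org "$ORG" --token "$TOKEN"
unzip -q "$PROXY.zip" -d remote

if diff -r remote/apiproxy apiproxy >/dev/null; then
  echo "No changes detected; skipping bundle creation and deployment."
else
  echo "Changes detected; creating and deploying a new revision."
  apigeecli apis create bundle --name "$PROXY" --proxy-folder apiproxy \
    --org "$ORG" --token "$TOKEN"
  # Deploy may also need an explicit revision flag, depending on the CLI version.
  apigeecli apis deploy --name "$PROXY" --org "$ORG" --env "$ENV" --token "$TOKEN"
fi
```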
Has anyone seen anything better?

I have mainly automated this with GitLab, but I'll share my approach since it may help with your specific case.
We use version control to manage our Apigee repos. I have set up a GitLab pipeline that checks the diff every time we push to the repository, and we only redeploy a proxy to Apigee if there are changes. When the pipeline is triggered, we check whether any target servers, proxies, or shared flows have changed; if changes are detected, we check the deployed revision and environments.
Through my deployment script, I get a list of these changes and pass it to the pipeline as a CHANGES variable, so only the modified proxies are deployed.
In my pipeline I can do something like git diff --name-only $CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA > /changes.txt and pass the content of that file into the CHANGES variable for deployment.
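A minimal sketch of that step, assuming proxies live under an apiproxies/<name>/ folder in the repo and a hypothetical deploy-proxy.sh script handles the actual deployment:

```bash
# Collect the files changed by this push.
git diff --name-only "$CI_COMMIT_SHA..$CI_COMMIT_BEFORE_SHA" > changes.txt

# Derive the set of modified proxies from the changed paths
# (assumes a layout like apiproxies/<proxy-name>/...).
CHANGES=$(grep '^apiproxies/' changes.txt | cut -d/ -f2 | sort -u | tr '\n' ' ')
echo "Proxies to redeploy: $CHANGES"

# Redeploy only the proxies that actually changed.
for proxy in $CHANGES; do
  ./deploy-proxy.sh "$proxy"   # hypothetical deploy script
done
```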

Related

How to manage automatic deployment to ECS using Terraform Cloud and CircleCI?

I have an ECS task which has 2 containers using 2 different images, both hosted in ECR. There are 2 GitHub repos for the two images (app and api), and a third repo for my IaC code (infra). I am managing my AWS infrastructure using Terraform Cloud. The ECS task definition is defined there using Cloudposse's ecs-alb-service-task, with the containers defined using ecs-container-definition. Presently I'm using latest as the image tag in the task definition defined in Terraform.
I am using CircleCI to build the Docker containers when I push changes to GitHub. I am tagging each image with latest and the variable ${CIRCLE_SHA1}. Both repos also update the task definition using the aws-ecs orb's deploy-service-update job, setting the tag used by each container image to the SHA1 (not latest). Example:
container-image-name-updates: "container=api,tag=${CIRCLE_SHA1}"
When I push code to one of the repos, e.g. api, a new version of the task definition is created, the service's version is updated, and the existing task is restarted using the new version. So far so good.
The problem is that when I update the infrastructure with Terraform, the service isn't behaving as I would expect. The ecs-alb-service-task has a boolean called ignore_changes_task_definition, which is true by default.
When I leave it as true, Terraform Cloud successfully creates a new version whenever I apply changes to the task definition. (A recent example was updating environment variables.) BUT it doesn't update the version used by the service, so the service carries on using the old version. Even if I stop a task, it will respawn using the old version. I have to manually go in and use the Update flow, or push changes to one of the code repos; then CircleCI will create yet another version of the task definition and update the service.
If I instead set this to false, Terraform Cloud will undo the changes to the service performed by CircleCI. It will reset the task definition version to the last version it created itself!
So I have three questions:
How can I get Terraform to play nice with the task definitions created by CircleCI, while also updating the service itself if I ever change it via Terraform?
Is it a problem to be making changes to the task definition from THREE different places?
Is it a problem that the image tag is latest in Terraform (because I don't know what the SHA1 is)?
I'd really appreciate some guidance on how to properly set up this CI flow. I have found next to nothing online about how to use Terraform Cloud with CI products.
I have learned a bit more about this problem. It seems like the right solution is to use a CircleCI workflow to manage Terraform Cloud, instead of having the two services effectively competing with each other. By default Terraform Cloud will expect you to link a repo with it and it will auto-plan every time you push. But you can turn that off and use the terraform orb instead to run plan/apply via CircleCI.
You would still leave ignore_changes_task_definition set to true. Instead, you'd add another step to the workflow after the terraform/apply step has made the change. This would be aws-ecs/run-task, which should relaunch the service using the most recent task definition, which was (possibly) just created by the previous step. (See the task-definition parameter.)
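For illustration only, the same idea expressed with plain CLI calls instead of the orbs (this is not what the orbs do internally; the cluster, service, and task definition family names are placeholders):

```bash
# Apply the infrastructure change; this may register a new task definition revision.
terraform apply -auto-approve

CLUSTER="my-cluster"    # placeholder
SERVICE="api-service"   # placeholder
FAMILY="api-task"       # task definition family name (placeholder)

# Passing the family name without a revision number makes ECS resolve the
# latest ACTIVE revision, so the service picks up the newest definition.
aws ecs update-service \
  --cluster "$CLUSTER" \
  --service "$SERVICE" \
  --task-definition "$FAMILY"
```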
I have decided that this isn't worth the effort for me, at least not at this time. The conflict between Terraform Cloud and CircleCI is annoying, but isn't that acute.

An AWS CI/CD Pipeline that allows manual deploy by commit

Background
I want to create the following CI/CD flow in AWS and GitHub for a React app using Amplify:
A single main branch, with short-lived feature branches and PRs into main.
Each PR triggers its own test environment in Amplify, with its own temporary subdomain, which gets torn down when the PR is merged, as described here.
Merging into main does not automatically trigger a deploy to production.
Instead, there is a separate mechanism (a web page, or amplify command, or even triggers based on git tags) for manually selecting a commit from main to deploy to production.
Questions
It's not clear to me if...
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
There is another AWS tool that solves this.
I'm looking for answers to those questions, or specific references in the docs which address them.
The answers for Amplify are Yes, Yes, Yes, Partially.
(1) A single main branch, with short-lived feature branches and PRs into main.
Yes. Feature branch deployments. You can define which branch patterns, such as feature/*, you wish to auto-deploy.
(2) Each PR triggers its own test environment in Amplify, with its own temporary subdomain,
Yes. Web Previews for PRs. "A web preview deploys every pull request made to your GitHub repository to a unique preview URL which is completely different from the URL your main site uses."
(3) Merging into main does not automatically trigger a deploy to production.
Yes. Disable automatic builds on main.
(4) Instead, there is a separate mechanism ... for manually selecting a commit from main to deploy to production.
Partially (HEAD only?). Call the StartJob API to manually trigger a build from, say, Lambda. The job type RELEASE starts a new job with the latest change from the specified branch. I am not sure if jobType: MANUAL with a commitId starts a job from an arbitrary commit hash.
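For reference, the same call from the AWS CLI looks roughly like this (the app id and branch are placeholders; whether MANUAL honours an arbitrary commit id is exactly the open question above):

```bash
# Build and release the latest commit on the branch.
aws amplify start-job \
  --app-id d1example23456 \
  --branch-name main \
  --job-type RELEASE

# Possibly: a MANUAL job with an explicit commit id (unverified whether
# Amplify builds that exact commit or still builds the branch HEAD).
aws amplify start-job \
  --app-id d1example23456 \
  --branch-name main \
  --job-type MANUAL \
  --commit-id abc1234
```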
Another workaround for 3+4 is to skip the build for an arbitrary commit. Amplify will skip building if [skip-cd] appears at the end of a commit message.
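For example (any commit message works, as long as the marker is at the end):

```bash
# Amplify skips the build for this commit because [skip-cd] ends the message.
git commit -m "Update README [skip-cd]"
git push origin main
```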
In my experience, I don't think there is any easy way to meet your requirement.
If you are using GitLab, you can try GitLab Review Apps to achieve that (I have tried it before with some scripts).
Support for this flow is already built into Amplify (based on the docs I've read, I think the answer is "no", but I'm not sure).
Check the links below; they may help:
https://www.youtube.com/watch?v=QV2WS535nyI
https://dev.to/rajandmr/deploying-react-app-using-aws-amplify-with-ci-cd-pipeline-setup-3lid
Support for this flow is already built into AWS CodePipeline, or if it can be configured there.
For this, you would need to create your own full pipeline. Yes, you can configure it in CodePipeline.
There is another AWS tool that solves this.
If you are okay with Jenkins, then Jenkins can help you achieve this.
You can run Jenkins in Docker on an AWS EC2 instance and create your pipeline there. You can also use parameterized builds to select the environment and git branch.
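A minimal sketch of running Jenkins in Docker on an EC2 instance (standard LTS image, default ports, and a named volume for persistence; adjust to your setup):

```bash
# Run the Jenkins LTS image, exposing the web UI and agent ports and
# persisting the home directory in a named volume.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```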

Best way of deployment automation

I've made a pretty simple web application with a REST API backend service written in Python/Django and an FE service written in JS/React. Both parts are containerized and can be launched locally via docker-compose. They live in separate GitHub repositories, and every time a new tag is pushed to the corresponding repo, an image is built and pushed to the corresponding ECR repo via GitHub Actions. Up to that point everything works smoothly, but the problem is that I don't know how to properly automate the deployment process to the test and production environments. The goal is to keep those environments as similar as possible.
My current solution for the test environment is to upload the docker-compose file to the EC2 instance via GitHub Actions and then manually run the docker-compose command, which pulls the images from ECR.
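Roughly, that step looks like this (the host, region, account ID, and paths are placeholders; assumes the AWS CLI and docker-compose are available on the instance):

```bash
# Copy the compose file to the instance.
scp docker-compose.yml ec2-user@my-ec2-host:/opt/app/docker-compose.yml

# Log in to ECR on the instance, then pull the new images and restart the containers.
ssh ec2-user@my-ec2-host '
  cd /opt/app &&
  aws ecr get-login-password --region eu-west-1 |
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com &&
  docker-compose pull &&
  docker-compose up -d
'
```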
Even though it's a simple solution, it's not very scalable or automated, and it requires manual work every time the application is updated. The desired flow is a manually triggered GitHub Action in each repository that deploys either the BE or the FE to the test or prod environment, without any need to SSH into the server or do any other manipulation with Docker.
I was looking at ECS, but it seems to be a solution for bigger apps that need several or more instances to run. I want my app to be used by many users, but I'm not sure when that will happen, so maybe I should stick to something less complicated than ECS. Are there any simpler solutions I'm missing, like Elastic Beanstalk or something from another provider?
I'll be happy to receive feedback on anything I wrote in this post, thanks!

best practice for bitbucket pipeline deployment in AWS to live server

I am on a project that is about to release its first version. I want to set up a Bitbucket pipeline for deploying to AWS. My concern is that users of the website might be affected while we are deploying. What is the best practice for deploying a new feature to the live server without affecting users on the website?
One possible option might be to put up a maintenance page and deploy the new code when not many users are on the website. Is there another way to deploy?
As mentioned in the comments, this depends on the underlying tools and technology, but I will focus on your last question.
One possible option might be to put up a maintenance page and deploy the new code when not many users are on the website. Is there another way to deploy?
First, you should not deploy a new feature without proper testing; the pipeline must include automated testing, because such code can sometimes break the entire application.
You should not put the application under maintenance during deployment; that is why we have a CI/CD pipeline. Design your pipeline so that you are confident the latest code and features will work in production as expected. Many AWS services support blue/green deployment, and the interesting part of blue/green deployment is rollback. You can explore further in the links below; a small CLI sketch follows them.
AWS_Blue_Green_Deployments
using-bitbucket-pipeline-for-aws-ecs-deployments
deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines
continuous-deployment-pipeline
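As a concrete illustration, a pipeline step that hands a new revision to CodeDeploy might look like this (the application, deployment group, and bucket names are placeholders; the blue/green behaviour and automatic rollback come from how the deployment group is configured):

```bash
# Trigger a CodeDeploy deployment of a revision previously uploaded to S3.
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name my-app-blue-green \
  --s3-location bucket=my-deploy-bucket,key=releases/my-app.zip,bundleType=zip
```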

Cloud Foundry Change Buildpack from Command Line

I have a Jenkins app running on Cloud Foundry for a POC. Since it's Jenkins it uses a bound service for file persistence.
I had to make a change to the Java Buildpack and would like Jenkins to use the updated buildpack.
I could pull the source for Jenkins from GitHub and push it again with updated references to the new buildpack in the manifest.yml file or via a command-line option. In theory, the bound file system service's state would remain intact. However, I haven't validated this assumption and am concerned I might lose that state.
I've looked through the client CLI to see if there's a way to explicitly swap buildpacks without another push. However, I didn't see anything.
Is anyone aware of a way to change the buildpack of an existing application without re-pushing it to Cloud Foundry?
After some research, I couldn't find any way to swap the buildpack without a push. I did discover that my bound file system service remained intact and didn't lose any work.
Answer: re-push to change the buildpack.
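For reference, the re-push with the updated buildpack looks like this (the app name and buildpack URL are placeholders; as noted above, the bound service kept its state across the push):

```bash
# Re-push the existing app, pointing it at the updated buildpack.
cf push my-jenkins-app -b https://github.com/example-org/java-buildpack.git
```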