CloudFormation Multi-Step Stack Deployment - amazon-web-services

We've built our entire platform using CloudFormation stacks and have dozens of services that follow the same deployment pattern:
Each service is broken out into two stacks. The first stack provisions an ECR repo for the app image and a CodeBuild project to build the image. The second stack provisions the services using the latest image from the previously provisioned ECR repo.
Before the ECR repo can have an image, the first stack needs to be deployed and the CodeBuild project must be run. Only once this is done can the second stack be deployed (otherwise the service cannot be provisioned, because there is no starter image to use).
Having it broken out into these parts also gives us the ability to provision the same service with different configurations. Say I have an image processing service that creates image thumbnails. I want to configure that service differently depending on whether it handles real-time requests from users or batch jobs. So my ImageProcessingService may be split into a UserImageProcessingService and a BatchImageProcessingService, but both services will use the shared ImageProcessingServiceRepo from ECR.
As a result, redeploying the same service to, say, another region cannot be easily automated without custom tools.
My question is:
How are you meant to automate the deployment of CloudFormation stacks when multiple steps are needed, especially steps involving tasks like submitting a CodeBuild job?
Is there a better way to structure my CloudFormation stacks to achieve my goal, or is it not possible to do what I need without something like Terraform?
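Today the whole flow can at least be expressed as plain AWS CLI calls; here is a minimal sketch of the manual steps (the stack, template, project, and parameter names are illustrative, not real):

    #!/usr/bin/env bash
    set -euo pipefail

    # Step 1: deploy the repo/build stack (hypothetical names throughout)
    aws cloudformation deploy \
      --template-file repo-stack.yaml \
      --stack-name ImageProcessingServiceRepo

    # Step 2: run the CodeBuild project and poll until it finishes
    BUILD_ID=$(aws codebuild start-build \
      --project-name ImageProcessingServiceBuild \
      --query 'build.id' --output text)
    STATUS=IN_PROGRESS
    while [ "$STATUS" = "IN_PROGRESS" ]; do
      sleep 15
      STATUS=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
        --query 'builds[0].buildStatus' --output text)
    done
    [ "$STATUS" = "SUCCEEDED" ] || { echo "Build failed: $STATUS"; exit 1; }

    # Step 3: deploy the service stack; the same template can be deployed
    # twice with different parameters for the user-facing and batch variants
    aws cloudformation deploy \
      --template-file service-stack.yaml \
      --stack-name UserImageProcessingService \
      --parameter-overrides Mode=realtime

Scripting it this way works, but it is exactly the kind of custom glue I was hoping to avoid.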

Related

How to update an AWS Fargate service outside AWS CodeDeploy in order to change the desired task count

When setting up AWS CodeDeploy to deploy a service, we have to provide two target groups, let's say TargetGroupBlue and TargetGroupGreen.
In the CloudFormation template we use TargetGroupBlue when linking the service to the load balancer.
TargetGroupGreen is created only to be used by AWS during a CodeDeploy deployment.
Step 1: We executed the create-stack command in order to create the service and load balancer. We have a working service now. Traffic is routed via TargetGroupBlue.
Step 2: Then we use CodeDeploy to do another deployment, which swaps the target group to TargetGroupGreen once done.
Step 3: Now we need to update the desired task count in the service, so we use the CloudFormation update-stack command. This fails because the target group is now TargetGroupGreen (CodeDeploy changed it in step 2) and our CloudFormation template used TargetGroupBlue to link the service to the load balancer.
A workaround could be to do all service-related updates outside CodeDeploy in an even-numbered release (so we always have to run CodeDeploy twice, so that we know traffic is always routed via TargetGroupBlue).
Is this the way we are supposed to handle service updates via CloudFormation and CodeDeploy?
Please help me get this figured out.
Even though AWS provides many cool tools for blue/green deploys, the combination of CodeDeploy and CloudFormation here really sucks.
The workaround they suggested was to use a custom resource in CloudFormation that triggers a Lambda function to update the service, effectively cheating the CloudFormation stack update.
But there are no proper samples for doing that, so it would take a lot of time to get it to work the way you need.
Furthermore, CloudFormation with hooks does not really work for bigger projects, as the load balancers cannot be shared.
So here is the open ticket; please give it a thumbs-up so AWS will prioritize this on their roadmap.
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/483
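In the meantime, the desired count can at least be changed outside CloudFormation, so the stale target-group linkage is never touched. A minimal sketch (the cluster and service names are assumptions):

    # bump the desired task count directly; note this drifts from
    # the count declared in the CloudFormation template
    aws ecs update-service \
      --cluster my-cluster \
      --service my-fargate-service \
      --desired-count 4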

AWS ECS run latest task definition

I am trying to run the latest task definition image built from a GitHub deployment (CD). On AWS each build creates a new task definition revision, for example "task-api:1", "task-api:2", but my cluster is still running task-api:1 even though a new image has been built. So far I have had to manually stop the old task and start a new one. How can I have it automated?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service, when a new service deployment is started the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
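As a sketch (all names here are assumptions), creating such a service with the default rolling-update deployment type from the CLI could look like:

    # wrap the task in a service; omitting the revision number
    # (task-api instead of task-api:1) resolves to the latest ACTIVE revision
    aws ecs create-service \
      --cluster my-cluster \
      --service-name task-api-service \
      --task-definition task-api \
      --desired-count 2 \
      --deployment-configuration maximumPercent=200,minimumHealthyPercent=50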
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy and CodePipeline (and CodeCommit if you integrate your code repository in AWS with your CI/CD).
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
Judging from what you have said in the comments:
I created my task via the AWS console, I am running just the task definition on its own without service plus service with task definition launched via the EC2 not target both of them, so in the task definition JSON file on my Github both repositories they are tied to a revision of a task (could that be a problem?).
It's difficult to understand exactly how you have this set up and it'd probably be a good idea for you to go back and understand the services you are using a little better using the guide you are following or AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html); see the sketch below.
Through IaC like the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
This can be automated but you would need to set up a process to automate it.
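For illustration, the CLI route might look like this (the cluster and service names are assumptions); passing the task definition family without a revision makes ECS pick the latest ACTIVE revision:

    # point the service at the newest revision and force a fresh rollout
    aws ecs update-service \
      --cluster my-cluster \
      --service task-api-service \
      --task-definition task-api \
      --force-new-deployment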
I would recommend reading some guides on how you could automate deployment and updates using the CDK. Amazon provides a good guide to get you started: https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.

How to deploy code on multiple instances in an Amazon EC2 Auto Scaling group?

So we are launching an ecommerce store built on Magento. We are looking to deploy it on an Amazon EC2 instance, using RDS as the database service and using Amazon Auto Scaling and an Elastic Load Balancer to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS). This much is working fine. But now, when I want to dynamically scale the number of instances, how will I deploy the code on the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
Plus, will it not be an overhead to pull code from Git and deploy every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the Auto Scaling group's launch configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you would write a script that is run whenever the ASG scales up and deploys a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section on an EC2 instance. You can use lifecycle hooks that will tell the ASG not to put the EC2 instance into service until everything you want configured is set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked all the CloudFormation pages; you may be using some other CI/CD tool for deploying your infrastructure, but hopefully that gets you started.
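As a sketch of what that UserData script might contain (the repo URL and paths here are placeholders, not from the question):

    #!/bin/bash
    # runs once, at first boot of every instance the ASG launches
    yum install -y git httpd
    git clone https://github.com/example/my-magento-app.git /var/www/html
    # ... install dependencies and write config pointing at the RDS endpoint ...
    systemctl enable --now httpd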
To start, do check AWS CloudFormation. You will be creating templates to design how the infrastructure of your application works ~ infrastructure as code. With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project, we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for creating AWS resources that rolls out an update to AWS every time you push to a specific branch of the repository.
Create an infrastructure pipeline
Have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and also AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside of course from all of this being defined in a CloudFormation template as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your learning on how launching a new version of an application works. To begin, see Docker, but feel free to check any resources at your disposal.
Continuing with my current project: we do have a separate pipeline dedicated to our application, which also gets triggered after our infrastructure pipeline updates. Our application pipeline builds a new version of our application via AWS CodeBuild; this creates an image that will become a container ~ see the Docker documentation.
We have two triggers, or two sources, that roll out an update through our application pipeline: one is when there are changes to the infrastructure pipeline and it builds successfully, and the second is when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; this area covers dynamically launching new instances, shutting down instances when needed, and replacing unhealthy instances. See also AWS CloudWatch; you can design criteria with it to trigger scaling up/down and/or in/out.
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and also check out more on AWS Auto Scaling. As for the automation process: if you push through with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Plus, will it not be an overhead to pull code from Git and deploy every time a new instance is spawned?
As mentioned earlier, having a pipeline for rolling out new versions of your application via CodeBuild creates an image with the new code changes, and when everything is ready it is deployed ~ it becomes a container. The old EC2 instance or the old container (depending on how you want your application deployed) will be gracefully shut down after the new version of your application is up and running. This gives you zero downtime.
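For reference, the build stage of such a pipeline typically boils down to commands like these (the account ID, region, and repo name are placeholders):

    # authenticate Docker to ECR, then build and push the new image
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest .
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest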

AWS CLI vs Console and CloudFormation stacks

Is there any known downside to creating resources on AWS through the CLI? Is one method more reliable/easier/error-prone/widely accepted/recommended than the other? While setting up recurring scripts, is there a reason why I would want to use CloudFormation or the AWS Console over the AWS CLI to run commands directly?
For example, if I were to create an ECS Fargate task definition, is there any reason why I might want to use AWS CloudFormation or the Console over the AWS CLI? The CLI syntax is straightforward and easy to use, and there are a few things (like setting up event rules/targets for a Fargate task specifically) that are not supported via CloudFormation yet.
The AWS CLI and AWS CloudFormation are two different tools that can be used to create infrastructure on AWS. The CLI is more powerful and offers finer-grained control than CloudFormation. CloudFormation makes it very easy to use YAML or JSON text files that can describe an entire enterprise in the cloud.
One of the strong benefits of CloudFormation is the automatic support for rolling back changes if anything fails while deploying a stack. The CLI in comparison would require you to figure out the details of what went wrong and how to get back to where your state was. Updating infrastructure using CloudFormation is another benefit. Make the change in the template and update the stack.
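That update flow is a single idempotent command; a minimal sketch (the template and stack names are assumed):

    # creates the stack on first run, updates it on later runs,
    # and rolls back automatically if a deployment fails
    aws cloudformation deploy \
      --template-file template.yaml \
      --stack-name my-stack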
For small setups, using the CLI is fine. However, once you get past launching an EC2 instance and start building VPCs, instances, key pairs, security groups, RDS, etc., you will find that the CLI has some real limitations: it is mostly too manual a process, not easily repeatable, and difficult to put under version control.
If you are constantly building, testing and deleting complex setups, CloudFormation is absolutely one of the best tools from AWS. Note that there are a number of third-party solutions with huge followings, such as Bamboo, Octopus, Jenkins, Chef, etc.
If your job is SysOps or DevOps then you absolutely want to master the CLI and CloudFormation. These are amazing tools for working with AWS. Also master Beanstalk, maybe OpsWorks and one of the third party tools like Jenkins.

Exploring tools to trigger a build script to roll out a specific Git branch to a subset of Amazon EC2 instances

We have multiple Amazon EC2 instances behind a load balancer. Our build script is written in Phing and is integrated with Git.
We are looking for a tool (like Jenkins or AWS CodeDeploy) which could display all the active instances currently behind the load balancer, allow us to select some of them (or select a previously defined group), and then trigger either of the following (whichever is better):
a build script hosted on the same dedicated server where the tool is hosted, or
the respective build scripts hosted on the selected EC2 instances.
We should be able to do the following:
specify a Git branch name, optionally, when we trigger the build script for any group of instances;
roll out in batches of boxes, so as to get some time to monitor load, and then move to the next batch if all is good. The best way, I guess, would be to specify a batch size (e.g. 10), so that the process waits for a user prompt after the rollout of each batch completes.
So, if we have to roll out two different Git branches to two groups of instances, we should be able to run them in two steps (if we do not specify a batch size).
Would like to know about experiences of people who dealt with something similar.
CodeDeploy supports Git (more precisely, GitHub). It also allows you to deploy only to tagged EC2 instances. Combined with a custom deployment configuration (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-deployment-configuration.html), you can also control how fast to deploy (the size of each batch).
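For example, a sketch of a custom deployment configuration that effectively deploys to half the fleet at a time (the config name is made up):

    # require 50% of instances to stay healthy, i.e. deploy in
    # batches of roughly half the fleet
    aws deploy create-deployment-config \
      --deployment-config-name HalfFleetAtATime \
      --minimum-healthy-hosts type=FLEET_PERCENT,value=50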
I would re-structure the question:
the choices you have for application deployment, and
whether the tool has an option to perform rolling deployments.
Jenkins is software for CI/CD; it will have to use plugins, custom scripting, or an existing orchestration setup to perform the deployments.
For software orchestration you have many choices; some of the more famous tools are Chef, Puppet, Ansible, etc. All of these need you to manage some kind of centralized setup, and all such software supports application deployment.
You need to make a decision on whether you would want to invest in maintaining such a setup.
If you decide against such a setup, you have the option of using managed services such as AWS OpsWorks, AWS CodeDeploy, hosted Chef, etc.
In choosing any of these services, you delegate the management of orchestration software to a vendor, which will ensure the service is up all the time.
AWS CodeDeploy and AWS OpsWorks are managed services and work pretty well on AWS setups.
AWS OpsWorks uses Chef under the hood.
AWS CodeDeploy provides only a subset of what OpsWorks provides and is responsible only for deployments. With AWS CodeDeploy you get convenient visualization of your software deployments through the AWS console.
With AWS CodeDeploy, you can achieve the goal of a partial rollout to EC2 instances.
You can do the same with other tools as well, but CodeDeploy in an AWS environment will take the least amount of work.
CodeDeploy also allows you to deploy from Git. Please refer to the following AWS documentation:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html
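A sketch of triggering such a deployment from a specific commit (the application, group, repository, and commit ID are placeholders; the config name is the one sketched above):

    # deploy one commit from GitHub to a deployment group, in batches
    aws deploy create-deployment \
      --application-name MyApp \
      --deployment-group-name ProductionGroup \
      --deployment-config-name HalfFleetAtATime \
      --github-location repository=my-org/my-repo,commitId=<commit-sha>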
The pitfall with CodeDeploy is that the agent that runs on the instances has been tested and is supported for only a limited number of OS combinations (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-supported-oses).
Also, if in the future you decide to move away from AWS, you will have to redo the deployment-related work.
The CodeDeploy service only charges you for the underlying AWS resources.
Please find the link to pricing documentation below:
https://aws.amazon.com/codedeploy/pricing/