I am using AWS Step Functions and I have made some changes to the state machine and the activity worker. The activity code changes are not backward compatible. Also, the activity code is deployed on my own hosts (it is not hosted in AWS Lambda).
I saw some examples online that follow blue-green deployment: https://theburningmonk.com/2019/08/how-to-do-blue-green-deployment-for-step-functions/
But I am not using AWS Lambda, so what are the ways I can deploy my changes that are not backward compatible? How do I do blue-green deployment in AWS Step Functions when the activities are hosted on non-AWS hosts?
"so what are the ways I can deploy my changes which are not backward compatible"
Regardless of the tech, this is exactly the situation that blue/green deployment is meant to prevent.
In a break-glass case, I don't think you even need a blue/green deployment.
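If you do still want a blue/green-style cut-over without Lambda, one pattern (not from the linked post, just a rough sketch with placeholder names and ARNs) is to version the activity itself: register a new activity ARN for the v2 workers, stand up a new state machine that points at it, shift new executions to the new state machine, and let the old one drain. Something along these lines with boto3:

```python
# Hypothetical blue/green cut-over for a Step Functions activity worker
# that runs on your own hosts (names and ARNs below are placeholders).
import json
import boto3

sfn = boto3.client("stepfunctions")

# 1. Register a NEW activity ARN for the v2 workers; old (v1) workers keep
#    polling the old activity ARN, so in-flight executions are unaffected.
green_activity = sfn.create_activity(name="process-order-v2")["activityArn"]

# 2. Create a NEW state machine (the "green" stack) whose Task state points
#    at the v2 activity ARN. The old ("blue") state machine is left untouched.
definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": green_activity,   # v2 activity, served only by v2 workers
            "End": True,
        }
    },
}
green_sm = sfn.create_state_machine(
    name="order-workflow-v2",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder
)["stateMachineArn"]

# 3. Point whatever starts executions (API, EventBridge rule, etc.) at the
#    green state machine ARN. Old executions drain on the blue state machine,
#    after which the blue state machine, activity and v1 workers can be retired.
print("Cut traffic over to:", green_sm)
```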
I am studying for my AWS Cloud Practitioner Certification and I am confused with the difference between AWS Lambda & AWS Elastic Beanstalk. From my understanding, for both services you upload your code to AWS and AWS essentially manages the underlying infrastructure for you.
I know with Lambda you upload your code to a 'Lambda Function' and set triggers for when the code executes.
With AWS EB you upload your application code and EB automatically handles the deployment, capacity, provisioning, etc...
They both sound very similar as you upload your code to both and both handle underlying instances/environments.
Thanks!
Elastic Beanstalk and Lambda are very different, even though some of their features may look similar. At a high level, Elastic Beanstalk deploys a long-running application, whereas Lambda deploys a short-running code function.
A Lambda invocation can run for at most 15 minutes, whereas an EB application can run continuously. Generally, we deploy websites/apps on EB, whereas Lambda is typically used for triggered functionality, such as processing an image when it gets uploaded to S3.
A single Lambda execution environment handles only one request at a time, whereas the number of concurrent requests EB can handle depends on your underlying infrastructure. So if you have, say, 100 concurrent requests, 100 Lambda instances will be created, whereas those 100 requests might be handled by a single underlying EC2 instance in EB.
Lambda is serverless (the underlying infrastructure is entirely abstracted from the developer), whereas EB is automation over infrastructure provisioning. You can still see your EC2 instances, load balancer, auto scaling group, etc. in your AWS console. You can even SSH/RDP into your instances and change running services. EB also allows you to use custom AMIs.
Lambda has the issue of cold starts, since its infrastructure is provisioned on demand by AWS, whereas with EB you generally have EC2 instances already provisioned and ready to handle your requests.
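If it helps to connect those limits to the API, here is a small boto3 sketch (the function name is a placeholder): the 15-minute cap corresponds to the maximum Timeout of 900 seconds, and you can cap how many copies run in parallel with reserved concurrency.

```python
# Hedged illustration of the limits mentioned above (function name is a placeholder).
import boto3

lam = boto3.client("lambda")

# A Lambda function can run for at most 15 minutes (900 seconds) per invocation.
lam.update_function_configuration(
    FunctionName="my-function",
    Timeout=900,        # hard upper bound imposed by the service
    MemorySize=512,
)

# Each concurrent request gets its own execution environment; you can cap how
# many run at once with reserved concurrency (e.g. at most 100 in parallel).
lam.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=100,
)
```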
All great (and exam-specific) points by SmartCoder. If I may add a general ancillary comment:
Wittgenstein said, "In most cases, the meaning of a word is its use." I think this maxim is remarkably apt for software engineering too. In the context of your question, those two AWS services are used for significantly different purposes.
Lambda - Say you developed a photo uploading application with Node.js that uploads some processed images to an S3 bucket. The core logic for this is probably quite straightforward, and it has a singular, distinct task: take in an image, do some processing, and, barring any exceptions, store it in a bucket. In this case, it's inefficient to waste time spinning up servers, configuring them with a runtime environment, downloading dependencies, doing maintenance, etc. A literal copy and paste of your code into the Lambda console, plus a few configuration settings, should get the job done. Plus, you save a lot of money as infrastructure is "provisioned" only when your Node.js function is invoked. Again, keep in mind the principle of this code performing a singular task.
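For illustration only (the scenario above is Node.js; this is a rough Python equivalent, and the bucket names are placeholders), the whole "singular task" function might look something like this:

```python
# A minimal sketch of the photo-processing idea described above,
# triggered by an S3 "ObjectCreated" event on the upload bucket.
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "my-processed-photos"   # assumed output bucket


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        image_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # ... do the actual image processing here (resize, watermark, etc.) ...
        processed = image_bytes

        s3.put_object(Bucket=DEST_BUCKET, Key=f"processed/{key}", Body=processed)

    return {"status": "ok"}
```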
Elastic Beanstalk - This same photo uploading system mentioned above might now mature into a more complex full-fledged software application that requires user management, authentication, and further processing of the images, which certainly requires more provisioning of resources. This application will probably do a lot of things with multiple code repositories for you to manage and deploy. And yet, you don't want to spend money on a DevOps engineer or learn to use an IaC (Infrastructure as Code) platform like CloudFormation or Terraform. In this case, Elastic Beanstalk is useful for a developer without too much in-depth DevOps knowledge as it's a PaaS (Platform as a Service) tool; it pretty much gives you a clear interface to spin up whole new production-ready systems.
Here are two good whitepapers I read a while back on the above topics.
https://docs.aws.amazon.com/whitepapers/latest/serverless-architectures-lambda/serverless-architectures-lambda.pdf
https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/introduction-devops-aws.pdf
Lambda runs in response to specific trigger events, and it exits as soon as its work is done.
We have a long running EMR cluster that has multiple libraries installed on it using bootstrap actions. Some of these libraries are under continuous development and their codebase is on GitHub.
I've been looking to connect Travis CI with AWS EMR in a way similar to how Travis works with CodeDeploy. The idea is to get the code on GitHub tested and deployed automatically to EMR, while using bootstrap actions to install the updated libraries on all of EMR's nodes.
A solution I came up with is to use an EC2 instance in the middle, where Travis and CodeDeploy are first used to deploy the code onto the instance. After that, a launch script on the instance is triggered to create a new EMR cluster with the updated libraries.
However, the above solution means we need to create a new EMR cluster every time we deploy a new version of the system.
Any other suggestions?
You definitely don't want to maintain an EC2 instance to orchestrate a CI/CD process like that. First of all, it introduces a number of challenges: you need to look after an entire server instance, keep it patched, deal with networking, and set up monitoring and alerting for availability issues; even then you won't have availability guarantees, which may cause other problems. Most of all, maintaining an EC2 instance for a purpose like that is simply unnecessary.
I recommend that you investigate using AWS CodePipeline together with Step Functions and Lambda.
The Step Functions state machine can be used to orchestrate the provisioning of your EMR cluster in a fully serverless environment. With CodePipeline, you can set up a webhook into your GitHub repo to pull your code and spin up a new deployment automatically whenever changes are committed to your master branch (or whatever branch you specify). You can use EMRFS to sync an S3 bucket or folder to your EMR file system for your cluster, and thereby gain the security benefits of IAM as well as the additional consistency guarantees that come with EMRFS. With Lambda, you also get seamless integration with other services, such as Kinesis, DynamoDB, and CloudWatch, among many others, which will simplify many administrative and development tasks and enable more sophisticated automation with minimal effort.
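To make the Step Functions part concrete, here is a hedged sketch of a Lambda handler that a Task state could invoke to provision the cluster; the release label, instance types, roles, and the S3 path of the bootstrap script are all placeholders:

```python
# A sketch of a Lambda handler that a Step Functions state could call to
# provision the EMR cluster with freshly built libraries (all values are
# placeholders you would replace).
import boto3

emr = boto3.client("emr")


def handler(event, context):
    response = emr.run_job_flow(
        Name="library-test-cluster",
        ReleaseLabel="emr-5.29.0",
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        # Install the freshly built libraries (synced to S3 by the pipeline)
        # on every node via a bootstrap action.
        BootstrapActions=[
            {
                "Name": "install-libraries",
                "ScriptBootstrapAction": {
                    "Path": "s3://my-artifacts-bucket/bootstrap/install_libs.sh"
                },
            }
        ],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    # The cluster id can be passed along the state machine for later steps.
    return {"ClusterId": response["JobFlowId"]}
```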
There are some great resources and tutorials for using CodePipeline with EMR, as well as in general. Here are some examples:
https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-apache-spark-applications-using-aws/
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
https://chalice-workshop.readthedocs.io/en/latest/index.html
There are also great tutorials for orchestrating applications with Step Functions and Lambda, including the use of EMR. Here are some examples:
https://aws.amazon.com/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/
https://aws.amazon.com/blogs/big-data/orchestrate-multiple-etl-jobs-using-aws-step-functions-and-aws-lambda/
https://github.com/DavidWells/serverless-workshop/tree/master/lessons-code-complete/events/step-functions
https://github.com/aws-samples/lambda-refarch-imagerecognition
https://github.com/aws-samples/aws-serverless-workshops
In the very worst case, if all of those options fail, such as if you need very strict control over the startup process on the EMR cluster after it completes its bootstrapping, you can always create a Java JAR that is loaded as a final step, and use it either to execute a shell script or to call the various AWS Java libraries to run your provisioning commands. Even in that case, you still have no need to maintain your own EC2 instance for orchestration purposes (which, in my opinion, would be hard to justify even if it were running in a Docker container on Kubernetes), because you can easily maintain that deployment process with a fully serverless approach as well.
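If you do end up needing that kind of post-bootstrap step, you do not necessarily need a custom JAR either; here is a hedged boto3 sketch (cluster id and S3 paths are placeholders) that runs a staged shell script through EMR's command-runner.jar:

```python
# A sketch of the "final step" idea above, using boto3 instead of a custom JAR:
# command-runner.jar on EMR can run an arbitrary script staged in S3.
import boto3

emr = boto3.client("emr")

emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # cluster id returned when the cluster was created
    Steps=[
        {
            "Name": "post-bootstrap-provisioning",
            "ActionOnFailure": "CANCEL_AND_WAIT",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "bash",
                    "-c",
                    "aws s3 cp s3://my-artifacts-bucket/provision.sh /tmp/ && bash /tmp/provision.sh",
                ],
            },
        }
    ],
)
```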
There are many great videos from the Amazon re:Invent conferences that you may want to watch to get a jump start before you dive into the workshops. For example:
https://www.youtube.com/watch?v=dCDZ7HR7dms
https://www.youtube.com/watch?v=Xi_WrinvTnM&t=1470s
Many more such videos are available on YouTube.
Travis CI also supports Lambda deployment, as mentioned here: https://docs.travis-ci.com/user/deployment/lambda/
I am using AWS Elastic Beanstalk and have deployed my Node.js app on it. Now I want to automate this process, i.e. commit changes to GitHub and have those changes automatically reflected in the app. I have two options: use Elastic Beanstalk itself or use CodeDeploy.
I have looked into both services.
With Elastic Beanstalk, I can automate using DeployBot, or use the Jenkins plugin for automation (AWS Elastic Beanstalk Deployment Plugin).
I also found this link on automating the process:
https://aws.amazon.com/blogs/devops/building-continuous-deployment-on-aws-with-aws-codepipeline-jenkins-and-aws-elastic-beanstalk/
I can also use the AWS CodeDeploy service to deploy my app to EC2 instances, using CodeCommit and CodePipeline.
For CodeDeploy, I can also follow this:
https://aws.amazon.com/blogs/devops/automatically-deploy-from-github-using-aws-codedeploy/
Both services can be used, but which one is more suitable for automating my process: AWS Elastic Beanstalk or AWS CodeDeploy?
The biggest difference is that:
CodeDeploy is the service that deploys your application to your existing EC2 instance(s). It does not take care of load balancing, scaling, etc.
Elastic Beanstalk is more of a PaaS service that provides all the wrapping you need to scale your application, so you don't have to worry about the DevOps aspects such as monitoring, scaling, etc.
I found an image that describes the differences nicely, including OpsWorks as well.
If you want to read more about the differences between CodeDeploy, Elastic Beanstalk, and OpsWorks, check out AWS's own whitepaper: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
The answer is very simple. Elastic Beanstalk offers cookie-cutter automated deployments based on a set of common AWS practices. CodeDeploy is broadly configurable and customizable.
You should use Elastic Beanstalk until you find a use case that cannot be solved without CodeDeploy (two use cases suggested by the AWS documentation posted by Maksim Luzik are deploying to EC2 instances managed internally by your organization and deploying to EC2 instances for third-party integration).
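For reference, the Elastic Beanstalk route driven from CI usually boils down to three API calls; here is a hedged boto3 sketch with placeholder names:

```python
# A sketch of driving an Elastic Beanstalk deployment from CI: upload a zipped
# bundle of the repo to S3, register it as an application version, then roll
# the environment onto it. All names below are placeholders.
import boto3

s3 = boto3.client("s3")
eb = boto3.client("elasticbeanstalk")

BUCKET = "my-eb-artifacts"
KEY = "my-node-app/build-42.zip"
VERSION = "build-42"

# 1. Ship the build artifact produced by CI to S3.
s3.upload_file("build-42.zip", BUCKET, KEY)

# 2. Register it as a new application version.
eb.create_application_version(
    ApplicationName="my-node-app",
    VersionLabel=VERSION,
    SourceBundle={"S3Bucket": BUCKET, "S3Key": KEY},
)

# 3. Point the environment at the new version; Beanstalk handles the rollout.
eb.update_environment(
    EnvironmentName="my-node-app-prod",
    VersionLabel=VERSION,
)
```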
Use the second option instead of third-party tools, as the AWS platform supports deploying your app from Git or Bitbucket using Python-based scripts.
I have worked with both tools and both are great for their respective jobs. I found Elastic Beanstalk convenient but less flexible when it comes to working with custom platforms.
I am using CodeDeploy in my current application. I chose it because of the following use cases:
I am using a Debian-based platform. Elastic Beanstalk does not offer that platform in its default list of available platforms, so what's the point if I need to create a custom AMI anyway?
I have two types of applications built on top of the same code base. One is the web app and the other runs a couple of queues in the background. I need to release the same code to both types of applications, which is why I found CodeDeploy does the better job.
The purpose is a production-level deployment of an 8-container application, using Swarm.
It seems (ECS aside) we are faced with two options:
Use the so-called docker-for-aws, which does the (Swarm) provisioning via a CloudFormation template.
Set up our VPC as usual, install the Docker engines, bootstrap the swarm (via init/join, etc.) and deploy our application on normal EC2 instances.
Is the only difference between these two approaches the swarm bootstrap performed by docker-for-aws?
Any other benefits of docker-for-aws compared to a normal AWS VPC provisioning?
Thx
If you need portability across different cloud providers, go with the AWS CloudFormation template provided by the Docker team. If you only need to run on AWS, ECS should be fine, but you will need to spend a bit of time figuring out how service discovery works there. The benefit of Swarm is that they made it fairly simple: you just access your services via their service names, as if they were DNS names, with built-in load balancing.
It's fairly easy to automate new environment creation with it, and if you need to move to, say, Azure or Google Cloud later, you simply use their template for that provider to get your Docker cluster ready.
The Docker team has put quite a few things into that template, and you really don't want to re-create them yourself unless you have to. For instance, if you don't use static IPs for your infrastructure (a fairly typical scenario) and one of the managers dies, you can't just restart it; you will need to manually re-join it to the cluster. Docker for AWS handles that by syncing IPs via DynamoDB and uses other provider-specific techniques to make failover/recovery work smoothly. Another example is logging: it pushes your logs automatically into CloudWatch, which is very handy.
A few tips on automating your environment provisioning if you go with the Swarm template:
Use an infrastructure automation tool to create a VPC per environment, and use a template provided by that tool so you don't write too much yourself. Using a separate VPC keeps each environment well isolated and easier to work with, with less chance of screwing something up. Also, you're likely to add more elements to those environments later, such as RDS. If you control your VPC creation, it's easier to do that and keep all related resources under the same VPC; say, the DEV1 environment's DB lives in the DEV1 VPC.
Then run the AWS CloudFormation template provided by Docker to provision a Swarm cluster within this VPC (they have a separate template for deploying into an existing VPC); a rough boto3 sketch of this step follows after these tips.
My preference for automation is Terraform. It lets me describe the desired state of the infrastructure rather than how to achieve it.
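If you script step 2 directly rather than through Terraform, it is essentially one CloudFormation call; here is a hedged boto3 sketch (the template URL and parameter keys are placeholders; check the parameters of the template version you actually use):

```python
# A sketch of launching Docker's CloudFormation template into a VPC you
# created yourself. TemplateURL and parameter keys below are placeholders.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="dev1-swarm",
    TemplateURL="https://s3.amazonaws.com/my-templates/docker-for-aws.yml",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyName", "ParameterValue": "dev1-keypair"},
        {"ParameterKey": "ManagerSize", "ParameterValue": "3"},
        {"ParameterKey": "ClusterSize", "ParameterValue": "5"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM roles
)

# Wait for the swarm stack to finish before wiring anything else to it.
cfn.get_waiter("stack_create_complete").wait(StackName="dev1-swarm")
```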
I would say no, there are basically no other benefits.
However, if you want to achieve all or several of the things that the docker-for-aws template provides, I believe your second bullet point should contain a bit more.
E.g.
Logging to CloudWatch
Setting up EFS for persistence/sharing
Creating subnets and route tables
Creating and configuring elastic load balancers
Basic auto scaling for your nodes
and probably more that I do not recall right now.
The template also ingests a bunch of information about resources related to your EC2 instances and makes it readily available to all Docker services.
I have been using the docker-for-aws template at work and have grown to appreciate a lot of what it automates. And what I do not appreciate I change, with the official template as a base.
I would go with ECS over a roll-your-own solution. Unless your organization has the capacity to re-engineer the services and integrations AWS already offers as part of its offerings, you would be artificially painting yourself into a corner for future changes. "Do not reinvent the wheel" comes to mind here.
Basically what @Jonatan states. Building your own solutions to integrate what is already available is a trial of pain when you could be working on other parts of your business/application.
We have multiple Amazon EC2 instances behind a load balancer. Our build script is written in Phing and is integrated with Git.
We are looking for a tool (like Jenkins or AWS CodeDeploy) that could display all the active instances currently behind the load balancer, allow us to select some of them (or select a previously defined group), and then trigger either of the following (whichever is better):
a build script hosted on the same dedicated server where the tool is hosted, or
the respective build scripts hosted on the selected EC2 instances.
We should be able to do the following:
optionally specify a Git branch name when we trigger the build script for any group of instances;
roll out in batches of boxes, so as to get some time to monitor load, and then move on to the next batch if all is good. The best way, I guess, would be to specify a batch size (e.g. 10), so that the process waits for a user prompt after the rollout to each batch completes.
So, if we have to roll out two different Git branches to two groups of instances, we should be able to run them as two separate steps (if we do not specify a batch size).
Would like to know about experiences of people who dealt with something similar.
CodeDeploy supports Git (more precisely, GitHub). It also allows you to deploy only to tagged EC2 instances. Combined with a custom deployment configuration (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-deployment-configuration.html), it also lets you control how fast (i.e., in what batch size) to deploy.
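A hedged sketch of what that looks like with boto3 (application, group, repository, and commit values are placeholders): a custom deployment configuration that keeps 90% of the fleet healthy (so roughly 10% is updated at a time), plus a deployment that pulls a specific GitHub commit.

```python
# Sketch of batch-controlled CodeDeploy rollouts from a GitHub revision.
import boto3

cd = boto3.client("codedeploy")

# Roll out to roughly 10% of instances at a time by requiring 90% to stay healthy.
cd.create_deployment_config(
    deploymentConfigName="OneBatchOfTenPercent",
    minimumHealthyHosts={"type": "FLEET_PERCENT", "value": 90},
)

cd.create_deployment(
    applicationName="my-php-app",
    deploymentGroupName="web-fleet",           # group targets instances by EC2 tags
    deploymentConfigName="OneBatchOfTenPercent",
    revision={
        "revisionType": "GitHub",
        "gitHubLocation": {
            "repository": "my-org/my-php-app",
            "commitId": "abc123def4567890abc123def4567890abc123de",
        },
    },
)
```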
I would re-structure the question:
the choices you have for application deployment,
and whether the tool has an option to perform rolling deployments.
Jenkins is CI/CD software; it will have to use plugins, custom scripting, or an existing orchestration setup to do the deployments.
For software orchestration you have many choices; some of the better-known tools are Chef, Puppet, Ansible, etc. All of these require you to manage some kind of centralized setup, and all of them support application deployment.
You need to make a decision on whether you would want to invest in maintaining such a setup.
If you decide against such a setup, you have the option of using managed services such as AWS OpsWorks, AWS CodeDeploy, hosted Chef, etc.
In choosing any of these services, you delegate the management of orchestration software to a vendor, which will ensure the service is up all the time.
AWS CodeDeploy and AWS OpsWorks are managed services on AWS and work very well with AWS setups.
AWS OpsWorks uses Chef under the hood.
AWS CodeDeploy provides only a subset of what OpsWorks provides and is responsible solely for deployments. With CodeDeploy you also get a convenient visualization of your software deployments in the AWS console.
With CodeDeploy, you can achieve your goal of partial rollouts to EC2 instances.
You can do the same with other tools as well, but CodeDeploy in an AWS environment will take the least amount of work.
CodeDeploy also allows you to deploy from Git. Please refer to the following AWS documentation:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html
The pitfall with CodeDeploy is that the agent that runs on your instances has been tested on, and is supported for, only a limited number of operating systems (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-supported-oses).
Also, if you decide to move away from AWS in the future, you will have to redo the deployment-related work.
The CodeDeploy service only charges you for the underlying AWS resources.
Please find the link to pricing documentation below:
https://aws.amazon.com/codedeploy/pricing/