I am using AWS Elastic Beanstalk and have deployed my Node.js app on it. Now I want to automate this process, i.e. commit changes to GitHub and have those changes automatically reflected in the app. I have two options: use Elastic Beanstalk or use CodeDeploy.
I have researched both services.
For Elastic Beanstalk, I can automate using DeployBot, or using the Jenkins plugin for automation (AWS Elastic Beanstalk Deployment Plugin).
I also found this link on automating the process:
https://aws.amazon.com/blogs/devops/building-continuous-deployment-on-aws-with-aws-codepipeline-jenkins-and-aws-elastic-beanstalk/
I can also use the AWS CodeDeploy service to automatically deploy my app to EC2 instances using CodeCommit and CodePipeline.
With CodeDeploy, I could also follow this:
https://aws.amazon.com/blogs/devops/automatically-deploy-from-github-using-aws-codedeploy/
Both services can be used, but which one is more suitable for automating my process: AWS Elastic Beanstalk or AWS CodeDeploy?
The biggest difference is that:
CodeDeploy is the service that deploys your application to your existing EC2 instance(s). It does not take care of load balancing, scaling, and so on.
Elastic Beanstalk is more of a PaaS offering that provides all the wrapping you need to run and scale your application, so you don't need to worry about the DevOps aspects such as monitoring, scaling, etc.
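To make that concrete, here is a minimal sketch of the per-instance deployment descriptor (appspec.yml) that CodeDeploy expects in every revision; with Elastic Beanstalk you never write this yourself. The destination path and hook script names are hypothetical:

# Minimal CodeDeploy appspec.yml, written via a shell heredoc for illustration.
# Paths and script names below are placeholders, not from the question.
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /                      # copy the whole revision...
    destination: /var/www/my-app   # ...to this directory on each instance
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
EOF

CodeDeploy runs those hook scripts on each instance it deploys to; load balancing and scaling stay your responsibility, which is exactly the gap Elastic Beanstalk fills.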
I also found an image that describes the differences nicely, including OpsWorks as well:
If you want to read more about the differences between CodeDeploy, Elastic Beanstalk, and OpsWorks, check out AWS's own whitepaper: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
The answer is very simple. ElasticBeanstalk offers cookie-cutter automated deployments based on a set of AWS common practices. CodeDeploy is broadly configurable and customizable.
You should use ElasticBeanstalk until you find a use case that cannot be resolved without using CodeDeploy (two use cases suggested by the AWS Documentation posted by Maksim Luzik are deploying to EC2 instances managed internally by your organization and deploying to EC2 instances for third-party integration).
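For reference, that cookie-cutter Elastic Beanstalk flow is only a few EB CLI commands. This is a minimal sketch with made-up application, environment, and region values:

# Assumes the EB CLI is installed and AWS credentials are configured.
eb init my-node-app -p node.js --region us-east-1   # register the EB application and platform
eb create my-node-env                               # provision the environment (ELB, Auto Scaling, EC2)
eb deploy my-node-env                               # package the current commit and deploy it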
Use the second option instead of third-party tools, as the AWS platform supports deploying your app from Git or Bitbucket using Python-based scripts.
I have worked with both tools and both are great for their respective jobs. I found Elastic Beanstalk convenient, but less flexible when it comes to working with custom platforms.
I am using CodeDeploy in my current application. I decided on it because of the following use cases:
I am using a Debian-based platform. Elastic Beanstalk does not offer that platform in its default list of available platforms, so what's the point if I need to create a custom AMI anyway?
I have two types of applications built on top of the same code base. One is the web app and the other runs a couple of queues in the background. I need to release the same code to both types of applications, and CodeDeploy does a better job at that (see the sketch below).
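For the second use case, a rough sketch of how that can look with the AWS CLI is below; the application, deployment group, repository, and commit values are all placeholders:

# One CodeDeploy application, two deployment groups (for example instances
# tagged Role=web and Role=worker); the same GitHub commit goes to both fleets.
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name web-fleet \
  --github-location repository=my-org/my-app,commitId=0123456789abcdef0123456789abcdef01234567

aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name worker-fleet \
  --github-location repository=my-org/my-app,commitId=0123456789abcdef0123456789abcdef01234567

Hook scripts in the revision can branch on the DEPLOYMENT_GROUP_NAME environment variable that CodeDeploy exposes, so the same bundle starts either the web server or the queue workers.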
Related
Currently we have Jenkins running on-premises (VMware) and are planning to move it into the cloud (AWS). What would be the best approach for installing Jenkins: on EC2 or on ECS?
The best way would be running it on EC2. Make sure you have granular control over your instance security group and network ACLs. I would recommend using Terraform to build your environment, as you can write it as code and also version control it. https://www.terraform.io/downloads.html
Have you previously containerized your Jenkins, on VMware itself? If not, and if you don't have experience with containers, go for EC2. It will be as easy as running on any other VM. For reproducing the infrastructure, use Terraform or CloudFormation.
I would recommend dockerizing your on-premises Jenkins first (see the sketch below). See how much effort is required to implement and administer/scale it, then go for ECS.
Otherwise, move to EC2 and see how much admin overhead and cost you incur. Then, if required, go for ECS.
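A minimal sketch of what dockerizing Jenkins amounts to, using the official community image and a named volume for the Jenkins home directory:

# Run Jenkins LTS in a container; the named volume keeps jobs and plugins across restarts.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts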
Another point you have to consider is how your Jenkins is architected. Are you using master/agent nodes? Are you running builds continuously so that VMs are never idle? Do you want easy scaling, such that a build environment is created and destroyed per build execution?
If you have no experience with running containers then create it on EC2. Before running on ECS make sure you really understand containers and container orchestration.
Just want to complement the other answers by providing a link to the official AWS whitepaper:
Jenkins on AWS
It might be of special interest as it discusses both options in detail: EC2 and ECS:
In this section we discuss two approaches to deploying Jenkins on AWS. First, you could use the traditional deployment on top of Amazon Elastic Compute Cloud (Amazon EC2). Second, you could use the containerized deployment that leverages Amazon EC2 Container Service (Amazon ECS). Both approaches are production-ready for an enterprise environment.
There is also an AWS sample solution for running Jenkins on ECS:
https://github.com/aws-samples/jenkins-on-aws:
This project will build and deploy an immutable, fault tolerant, and cost effective Jenkins environment in AWS using ECS. All Jenkins images are managed within the repository (pulled from upstream) and fully configurable as code. Plugin installation is automated, including versioning, as well as configured through the Configuration as Code plugin.
I am trying to find a solution for configuration management using AWS OpsWorks. From what I can see, AWS offers three flavors of OpsWorks:
OpsWorks for Chef Automate
OpsWorks for Puppet Enterprise
OpsWorks Stacks
I have read the basics of all three of them, but I am unable to compare them or understand when to use which solution.
I want to implement a solution for my multiple EC2 instances with which I can deliver updates to all my instances from a central repository (GitHub), and roll back changes if needed.
So following are my queries:
Which of the three solutions is best for this use case?
What should I use if my instances are in different regions?
I am unable to find anything useful on these topics so that I can make my decision. It would be great if I can get links to some useful articles as well.
Thanks in advance.
Terraform, Packer, and Ansible are a great combination; I use them every day to configure AMIs and build out all my infrastructure.
Terraform - configuration management for infrastructure; it allows you to provision all the AWS, Azure, and GCE components you need to run your application.
Packer - creates reusable machine images by pre-installing software that is common to your applications.
Ansible - pre- and post-provisioning configuration management. You can use Ansible with Packer to provision software into an AMI, then, if needed, use Ansible to configure it after provisioning. There is no need for a Chef server or Puppet master; you can run Ansible from your desktop if you have access to the cloud servers.
This example provisions all the infrastructure for a WordPress site and uses Ansible to configure it after provisioning.
https://github.com/strongjz/tf-wordpress
All of this can also be automated in a Jenkins pipeline or with other continuous deployment tools like CircleCI, etc.
Ansible has no restriction on regions, and neither does Terraform. Packer runs locally or on a CD server.
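A rough sketch of how the three tools chain together on a workstation or CD server; the template, inventory, and playbook file names are hypothetical:

# 1. Bake a reusable AMI with the common software pre-installed.
packer build app-ami.pkr.hcl

# 2. Provision the AWS infrastructure that launches instances from that AMI.
terraform init
terraform apply

# 3. Apply any post-provisioning configuration over SSH.
ansible-playbook -i inventory.ini site.yml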
Examples:
https://www.terraform.io/intro/examples/aws.html
https://github.com/ansible/ansible-examples
https://www.packer.io/intro/getting-started/build-image.html
We have multiple Amazon EC2 instances behind a load balancer. Our build script is written in Phing and is integrated with Git.
We are looking for a tool (like Jenkins or AWS CodeDeploy) which could display all the active instances currently behind the load balancer and then allow us to select some of them (or select a previously defined group) and trigger either of the following (whichever is better):
a build script hosted on the same dedicated server where the tool is hosted.
or the respective build scripts hosted on the selected EC2 instances.
We should be able to do the following -
optionally specify a Git branch name when we trigger the build script for any group of instances.
be able to roll out in batches of boxes, so as to get some time to monitor load, and then move to the next batch if all is good. The best way, I guess, would be to specify a batch size (e.g. 10), so that the process waits for a user prompt after the rollout to each batch completes.
So, if we have to roll out two different Git branches to two groups of instances, we should be able to run them in two steps (if we do not specify a batch size).
Would like to know about experiences of people who dealt with something similar.
For CodeDeploy: it supports Git (more precisely, GitHub). It also allows you to deploy only to tagged EC2 instances. If combined with a custom DeploymentConfig (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-deployment-configuration.html), you can also control how fast (i.e. the size of each batch) to deploy.
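As a sketch of that batch control, a custom deployment configuration can be created with the AWS CLI; the config name and fleet size below are assumptions. With, say, 50 instances in the group and a minimum of 40 healthy hosts, CodeDeploy rolls out to at most 10 instances at a time:

# Require at least 40 hosts to stay healthy during a deployment, which for a
# 50-instance fleet limits each wave to roughly 10 instances.
aws deploy create-deployment-config \
  --deployment-config-name batch-of-ten \
  --minimum-healthy-hosts type=HOST_COUNT,value=40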
I would restructure the question around two things: the choices you have for application deployment, and whether the tool has an option to perform rolling deployments.
Jenkins is CI/CD software; for deployments it has to rely on plugins, custom scripting, or an existing orchestration setup.
For software orchestration you have many choices; some of the better-known tools are Chef, Puppet, Ansible, etc. All of these require you to manage some kind of centralized setup, and all such software supports application deployment.
You need to make a decision on whether you would want to invest in maintaining such a setup.
If you decide against such a setup, you have the option of using managed services such as AWS OpsWorks, AWS CodeDeploy, hosted Chef, etc.
In choosing any of these services, you delegate the management of the orchestration software to a vendor, which ensures the service is up all the time.
AWS CodeDeploy and AWS OpsWorks are managed services on AWS and work pretty well with AWS setups.
AWS OpsWorks uses Chef under the hood.
AWS CodeDeploy only provides a subset of what OpsWorks provides and is responsible only for deployments. With AWS CodeDeploy you get a convenient visualization of your software deployments through the AWS console.
With AWS CodeDeploy, you can achieve the goal of a partial rollout to EC2 instances.
You can do the same with other tools as well, but CodeDeploy will take the least amount of work in an AWS environment.
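As a sketch of how such a group of instances is defined for CodeDeploy (matching the question's "select a group defined previously"), the one-time setup below targets instances by EC2 tag; every name, tag, and ARN is a placeholder:

# Create the CodeDeploy application and a deployment group that selects
# instances by tag and rolls out to half the fleet at a time.
aws deploy create-application --application-name my-app

aws deploy create-deployment-group \
  --application-name my-app \
  --deployment-group-name web-fleet \
  --ec2-tag-filters Key=Role,Value=web,Type=KEY_AND_VALUE \
  --deployment-config-name CodeDeployDefault.HalfAtATime \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole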
CodeDeploy also allows you to deploy from Git. Please refer to the following AWS documentation:
http://docs.aws.amazon.com/codedeploy/latest/userguide/github-integ-tutorial.html
The pitfall with CodeDeploy is that the agent that runs on your instances has been tested and is supported only for a limited number of OS combinations (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html#how-to-run-agent-supported-oses).
Also, if in the future you decide to move away from AWS, you will have to redo the deployment-related work.
The CodeDeploy service only charges you for the underlying AWS resources.
Please find the link to pricing documentation below:
https://aws.amazon.com/codedeploy/pricing/
My app is created using the AWS Elastic Beanstalk service; do I need to use the AWS CodeDeploy service to deploy my app?
Currently I just do:
eb deploy myApp
Then, a new application version is deployed without using AWS CodeDeploy.
So, am I doing something wrong?
Elastic Beanstalk does it on your behalf. During the deployment process you define some policies and roles, which allow Elastic Beanstalk to call other services on your behalf; CodeDeploy is one of those services.
Elastic Beanstalk automates your process and sets up the whole deployment environment for you (PHP, nginx/Apache in the case of a web app). If you look in /opt/elasticbeanstalk/ you can see a codedeploy folder there, which means you do not need to do it manually.
AWS CodeDeploy is a different approach and provides more control: how you want your changes to be pushed, whether they go to all instances at once or one by one, the minimum number of healthy instances, and so on.
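For instance, that push behaviour is chosen per deployment via a deployment configuration; the application, group, and bucket names in this sketch are invented:

# Built-in configurations include CodeDeployDefault.AllAtOnce,
# CodeDeployDefault.HalfAtATime and CodeDeployDefault.OneAtATime.
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name production \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip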
Check here:
http://cloudacademy.com/blog/how-to-deploy-application-code-from-s3-using-aws-codedeploy/
http://blog.powerupcloud.com/2016/03/24/deployment-automation-using-aws-code-depoly/
https://blogs.aws.amazon.com/application-management/post/Tx33XKAKURCCW83/Automatically-Deploy-from-GitHub-Using-AWS-CodeDeploy
You can update your application to a new version using the CLI as follows:
$eb deploy --version
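A sketch of that command with an explicit environment name and version label (both invented here); check eb deploy --help on your CLI version for the exact semantics of --version:

# Deploy an existing application version (labelled v42 here) to the my-node-env environment.
eb deploy my-node-env --version v42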
You are not doing anything wrong. eb deploy lets you deploy apps served from Elastic Beanstalk. AWS CodeDeploy, on the other hand, is more flexible and gives you more control; you can, for example, deploy apps you are serving from EC2 instances that are not managed by Elastic Beanstalk.
With AWS you can, for example, deploy to multiple environments, i.e. development, staging, and production.
Elastic Beanstalk and CodeDeploy are completely different AWS services; they are independent of each other and follow different deployment approaches.
What you're doing is totally correct to deploy a new version of your code.
AWS Elastic Beanstalk itself has a nice capability for deploying applications. You don't need to use AWS CodeDeploy on top of it; that would be superfluous. You can use the Beanstalk tooling itself to deploy the code.
AWS CodeDeploy is a building block service focused on helping developers deploy and update software on any instance, including Amazon EC2 instances and instances running on-premises.
AWS Elastic Beanstalk (as well as AWS OpsWorks btw) are end-to-end application management solutions.
When it comes to deploying a new software release on Beanstalk, you are better off using the deployment process Beanstalk itself provides:
eb deploy myApp
I have developed a .NET MVC application and have started playing around with AWS and deploying it via the Visual Studio Toolkit. I have successfully deployed the application using the Elastic Beanstalk option in the toolkit.
As I was going over the tutorials for deploying .NET apps to AWS with the toolkit, I noticed there are tutorials for deploying with both Elastic Beanstalk and CloudFormation. What is the difference between these two?
From what I can tell, it seems like they both essentially are doing the same thing - making it easier to deploy your application to the AWS cloud (setting up EC2 instances, load balancer, auto-scaling, etc). I have tried reading up on them both, but I can't seem to get anything other than a bunch of buzz-words that sound like the same thing to me. I even found an FAQ on the AWS website that is supposed to answer this exact question, yet I don't really understand.
Should I be using one or the other? Both?
They're actually pretty different. Elastic Beanstalk is intended to make developers' lives easier. CloudFormation is intended to make systems engineers' lives easier.
Elastic Beanstalk is a PaaS-like layer on top of AWS's IaaS services which abstracts away the underlying EC2 instances, Elastic Load Balancers, auto-scaling groups, etc. This makes it a lot easier for developers, who don't want to be dealing with all the systems stuff, to get their application quickly deployed on AWS. It's very similar to other PaaS products such as Heroku, EngineYard, Google App Engine, etc. With Elastic Beanstalk, you don't need to understand how any of the underlying magic works.
CloudFormation, on the other hand, doesn't automatically do anything. It's simply a way to define all the resources needed for deployment in a huge JSON/YAML file. So a CloudFormation template might actually create two Elastic Beanstalk environments (production and staging), a couple of ElastiCache clusters, a DynamoDB table, and then the proper DNS in Route53. I then upload this template to AWS, walk away, and 45 minutes later everything is ready and waiting. Since it's just a plain-text JSON/YAML file, I can stick it in my source control which provides a great way to version my application deployments. It also ensures that I have a repeatable, "known good" configuration that I can quickly deploy in a different region.
For getting started quickly deploying a standard .NET web-application, Elastic Beanstalk is the right service for you.
AWS CloudFormation: "Template-Driven Provisioning"
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
CloudFormation (CFn) is a lightweight, low-level abstraction over existing AWS APIs. Using a static JSON/YAML template document, you declare a set of Resources (such as an EC2 instance or an S3 bucket) that correspond to CRUD operations on the AWS APIs.
When you create a CloudFormation stack, CloudFormation calls the corresponding APIs to create the associated Resources, and when you delete a stack, CloudFormation calls the corresponding APIs to delete them. Most (but not all) AWS APIs are supported.
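As an illustration of that declarative model, here is a minimal sketch: a template that declares a single S3 bucket, created as a stack with the AWS CLI (the file and stack names are arbitrary):

# A tiny CloudFormation template: one Resource, mapping to one underlying API call (S3 CreateBucket).
cat > minimal-stack.yml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
EOF

# Create/update the stack; deleting the stack later removes the bucket with it (if empty).
aws cloudformation deploy --template-file minimal-stack.yml --stack-name demo-stack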
AWS Elastic Beanstalk: "Web Apps Made Easy"
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring.
Elastic Beanstalk (EB) is a higher-level, managed 'platform as a service' (PaaS) for hosting web applications, similar in scope to Heroku. Rather than deal with low-level AWS resources directly, EB provides a fully-managed platform where you create an application environment using a web interface, select which platform your application uses, create and upload a source bundle, and EB handles the rest.
Using EB, you get all sorts of built-in features for monitoring your application environment and deploying new versions of your application.
Under the hood, EB uses CloudFormation to create and manage the application's various AWS resources. You can customize and extend the default EB environment by adding CloudFormation Resources to an EB configuration file deployed with your application.
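That extension point is the .ebextensions directory in your source bundle. A minimal, hypothetical sketch that adds one extra CloudFormation resource to the stack EB manages:

# A configuration file that Elastic Beanstalk merges into its own CloudFormation
# stack on deploy; the resource and file names are made up.
mkdir -p .ebextensions
cat > .ebextensions/extra-resources.config <<'EOF'
Resources:
  AppScratchBucket:
    Type: AWS::S3::Bucket
EOF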
Conclusion
If your application is a standard web-tier application using one of Elastic Beanstalk's supported platforms, and you want easy-to-manage, highly-scalable hosting for your application, use Elastic Beanstalk.
If you:
Want to manage all of your application's AWS resources directly;
Want to manage or heavily customize your instance-provisioning or deployment process;
Need to use an application platform not supported by Elastic Beanstalk; or
Just don't want/need any of the higher-level Elastic Beanstalk features
then use CloudFormation directly and avoid the added configuration layer of Elastic Beanstalk.
Cloud Formation is a service that lets you deploy AWS services. You create a template file that describes which services you want. When you deploy that template, Cloud Formation creates the resources for you as a "package". All the resources you defined in your template are started and terminated together. Examples of the types of resources that can be created with Cloud Formation are: S3, EC2 instances, Auto Scaling, DynamoDB, etc. For EC2, Cloud Formation also gives you the ability to make use of "cfn-init" scripts, which can be used in conjunction with the template to bootstrap your instances.
Elastic Beanstalk uses Cloud Formation templates and scripts to: 1. create a load balancer and Auto Scaling group, 2. copy your code to S3, 3. bootstrap an EC2 instance to download the code from S3 and deploy it.
Cloud Formation is not as easy to use as EB, but it is much more powerful, because you can create resources other than EC2 instances, control how the cfn-init scripts run, etc.
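A compact, hypothetical sketch of that cfn-init pattern: the instance's Metadata declares what to install, and the UserData script calls cfn-init at boot to apply it (the AMI ID is a placeholder):

cat > web-instance.yml <<'EOF'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              httpd: []          # have cfn-init install Apache
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
    Properties:
      ImageId: ami-00000000      # placeholder AMI ID
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
            --resource WebServer --region ${AWS::Region}
EOF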
There are other differences worth noting. Elastic Beanstalk is designed as a container for a single app. I have a set of several websites and services, but found it very difficult to deploy multiple websites with Beanstalk and was advised by AWS support, after several attempts, to use CloudFormation in this situation as it has the extra flexibility.
There's a really helpful article on bootstrapping AWS CloudFormation and updating a running site here that's much clearer than the AWS pages. Still trying to work out if we can deploy from VS straight to the CloudFormation template stored on S3 and get it to auto-update like Beanstalk...
These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS and third party resources. It supports the infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using AWS Elastic Beanstalk).
AWS CloudFormation supports Elastic Beanstalk application environments as one of the AWS resource types. This allows you, for example, to create and manage an AWS Elastic Beanstalk–hosted application along with an RDS database to store the application data. In addition to RDS instances, any other supported AWS resource can be added to the group as well.
Both are for provisioning infrastructure, but they differ in their approach.
Beanstalk: the starting point is the code. I have Node.js code I want to upload and run; please provision the infrastructure for me. Platform as a Service (PaaS).
CloudFormation: the starting point is the infrastructure. Please create an EC2 instance with a load balancer, security group, etc., so that I can upload my Node.js code to it. Infrastructure as Code (IaC).
Elastic Beanstalk automatically handles deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring, based on the code you upload to it, whereas CloudFormation is an automated provisioning engine designed to deploy entire cloud environments via a JSON/YAML template.
Beanstalk: Gives the developer the ability to manage only code and not systems
Cloud Formation: Simplifies and makes everything easier for a Systems Engineer
If a developer or the dev team is looking for quick MVP testing, the best option is to get deployed quickly with Beanstalk and check.
When an AWS migration happens, systems engineers will get involved in provisioning, and CloudFormation will help a lot and give much more granular control.
Beanstalk internally uses CloudFormation.
Beanstalk - mainly helpful for software developers.
Example: you want to start up a PC quickly and run an application. You don't buy the PC components (hard disk, RAM, processor) separately; you buy a whole desktop or a laptop with the required configuration. You don't care how it runs inside, as long as your application runs. Beanstalk gives you exactly this: everything ready-made, with nothing to worry about.
CloudFormation - mainly helpful for systems engineers / hardware folks.
Example: you want to assemble hundreds of PCs and give them to the developers. Instead of assembling so many PCs yourself, you can just hand over a list of parts and the PCs are assembled for you by the retailer.
Similarly, you create a template and send it to CloudFormation, and it will finish the work for you with no extra effort.