Among other things, each instance will need a growing number of host bindings in IIS. There are additional OS configurations that need to be made beyond what Beanstalk is designed for, but I believe the answer to the question will be the same.
So my question is simply: how are the new instances created? Are they as basic as a new default instance with just my application, or do the new instances come up with all of the same custom IIS bindings and OS configuration that the others (my minimum instances) have?
Elastic Beanstalk creates new instances in the following way:
1. Creates a new instance based on the AMI defined in the environment configuration.
On the created instance:
2. Runs bootstrap scripts to set the instance up.
3. Downloads the current version of the application bundle from S3.
4. Runs the Elastic Beanstalk hooks, which apply the configuration files from .ebextensions and deploy the application (see the sketch after this list).
5. The application is up and running. :)
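In other words, the custom IIS bindings and OS configuration from the question should be expressed in .ebextensions so that step 4 re-applies them on every instance the environment launches. As a hedged sketch for the Windows platform (the file name, site name, and host header are placeholders, and the exact PowerShell you need will differ), such a config file could look like:

# .ebextensions/iis-bindings.config  (file name is arbitrary, extension must be .config)
commands:
  add_binding:
    command: powershell.exe -Command "Import-Module WebAdministration; New-WebBinding -Name 'Default Web Site' -Protocol http -Port 80 -HostHeader 'app.example.com'"
    ignoreErrors: true   # the binding may already exist on subsequent deployments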
Related
I need to install a gRPC PHP extension on my elastic beanstalk created EC2 instances. I have auto-scaling enabled, and when a new EC2 instance is kicked in, I lose all my installations.
From the documentation, I found two ways to fix this:
1. Create an instance, download and install everything required, and take an image of that instance. Then add the image ID (AMI ID) in the Elastic Beanstalk environment (under Configuration -> Instances), so every new instance created by auto-scaling will be launched from the image I provide. This approach never worked for me. Am I missing something here?
2. Write a config file in .ebextensions to automatically install all the required extensions whenever a new instance is kicked in. For this, we need to create a YAML/JSON file as per the documentation at cloud.google.com/php/grpc.
Can someone guide which approach should be taken? And help me create yaml/json file to automate the process for all the instances in auto scaling?
As per the AWS documentation here, to customize your Elastic Beanstalk environment you should use .ebextensions configuration files.
Creating .ebextensions provides the ability to completely customize the instances and environment that your application is running on/in, and makes upgrades, changes and/or additions to your instances and environment straightforward and efficient.
As a side note, SSH'ing into Elastic Beanstalk instances and making on-instance changes should be avoided. The auto-scaling issue you are facing is one reason; the other major reason is that making changes on the instance itself puts the instance's state out of sync with the state Elastic Beanstalk is expecting. If the state is out of sync, subsequent deployments can fail because the environment has drifted from the application version EB is expecting. Managing your application and environment through code and .ebextensions eliminates this issue.
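For the gRPC case specifically, a .ebextensions config file along these lines is the usual approach. Treat this as a sketch rather than a drop-in file: the exact package names and php.ini directory depend on your platform and PHP version.

# .ebextensions/grpc.config  (file name is arbitrary)
packages:
  yum:
    gcc-c++: []
    php-devel: []        # assumption: adjust to your platform's PHP package names
    php-pear: []
commands:
  01_install_grpc:
    command: pecl install grpc
    test: '! pecl list | grep -q grpc'   # skip if the extension is already installed
  02_enable_grpc:
    command: echo "extension=grpc.so" > /etc/php.d/50-grpc.ini   # ini path is an assumption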
I have two EC2 servers named Ec2-Webserver-1 and EC2-WebServer-2 inside the same VPC, in two different subnets, served by an Application Load Balancer.
When I make a small change on the first server, I then have to manually make the same change on the other server too. Otherwise I have to create an AMI and launch a new server from that AMI.
Creating an AMI every time I make a small change does not seem like the right approach.
Are there any other tools in AWS, or third-party tools, that can automatically replicate the changes made on Server 1 to Server 2? I am currently using a CentOS AMI.
I would suggest looking into CloudFormation. You can define your EC2 instance, which IAM roles you want it to have, and a whole lot of other things. Once that is done, you can just run the CloudFormation template and AWS will provision the EC2 instance with your defined settings automatically. CloudFormation link
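A minimal sketch of that idea (resource names, AMI ID, subnet ID, and user data below are placeholders) might look like this:

# Sketch only: both servers come from one template, so a change is made once
# in the template and rolled out with a stack update.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebServer1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0      # placeholder CentOS AMI
      InstanceType: t3.micro
      SubnetId: subnet-aaaa1111           # placeholder, subnet A
      UserData:
        Fn::Base64: |
          #!/bin/bash
          yum install -y httpd
          systemctl enable --now httpd
  # WebServer2 would be an identical resource pointing at the second subnet.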
You should be looking into CodeDeploy: https://aws.amazon.com/codedeploy/getting-started/?nc=sn&loc=4 Possibly combine it with CodePipeline. Here is a starting point for deciding whether you need one or both: https://forums.aws.amazon.com/thread.jspa?threadID=172485
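If you go the CodeDeploy route, the heart of it is an appspec.yml in your repository that tells the agent on each server what to copy and which scripts to run, so a single deployment updates both servers identically. A rough sketch (the destination path and script name are placeholders):

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  AfterInstall:
    - location: scripts/after_install.sh   # placeholder script in your repo
      timeout: 300
      runas: root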
I was using a free-tier AWS account with one EC2 machine (Linux). I have a simple website with a backend server running Django on port 8000 and a frontend server written in Angular running on HTTP (port 80). I used nginx for HTTPS and for routing calls to the backend and frontend servers.
Now, for the backend build, I do these 3 main steps (which I automated with Jenkins running on the same machine):
1) git pull (Pull the latest code from repo).
2) Do migrations (Updating my db with any new table).
3) Restarting the django server. (I was using gunicorn).
Now, I have split my frontend and backend servers onto 2 different machines using Auto Scaling groups, and I am using ELB (AWS Elastic Load Balancer) to route the requests. I am done with the setup, but now I am having a problem with continuous deployment. The main thing is that the ELB uses Auto Scaling groups, which in turn use an AMI.
Now, since AMIs are created once, my first question is how to automate this process and deploy my latest code to the already-running AWS servers.
Second, if I want to run a few steps just once for all the servers, like my second step of updating the DB with new tables, how do I achieve that?
Third, if these steps need to run on a machine, do I need another EC2 instance to automate the process of creating the AMI, updating the Auto Scaling groups with it, and then deploying the latest code?
So basically, I want to know the best practices people follow for deploying the latest code to AWS machines that were created by Auto Scaling groups from an AMI. I also use Bitbucket for code management.
First question: how to automate package-based deployment.
Instead of creating a new AMI for every release, create a baseline AMI which only changes when a new release requires OS changes, security patches, etc. Look into tools such as Packer to create AMIs automatically. To automate your code deployment when it changes, use a package-based deployment approach: create a package for every release (this should be part of your CI process) and store it in a repository such as Nexus, Artifactory, or even a simple S3 bucket.
When you deploy a new instance of your application, it should run some sort of script to pull and unpack/install that package on the instance. This is the basic concept, and there are many tools that can help you achieve it, for example Chef or AWS CloudFormation.
So essentially, step 1 should pull the code, create the package, and store it in some repository available to your application servers; this can be done offline.
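As a rough illustration of that boot-time pull, assuming the package is a tarball in an S3 bucket the instance profile can read (all names and IDs below are placeholders):

# CloudFormation fragment (goes under Resources:)
AppLaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: ami-0123456789abcdef0     # baseline AMI built with Packer
      InstanceType: t3.micro
      UserData:
        Fn::Base64: |
          #!/bin/bash
          # Pull the versioned package produced by CI and unpack it
          aws s3 cp s3://my-artifact-bucket/myapp-1.2.3.tar.gz /tmp/app.tar.gz
          mkdir -p /var/www/app
          tar -xzf /tmp/app.tar.gz -C /var/www/app
          systemctl restart gunicorn     # or whatever serves the app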
Second question: how to run other tasks, such as updating the database schema.
As mentioned above, this can also be part of your deployment automation. Whether you use Chef or even a simple bash script, it can update the database schema before unpacking the new code; this really depends on your database, how you manage it, and who orchestrates the deployment.
For example, you could have a Jenkins job that pulls the new schema and updates your database whenever you roll out a release.
Your third question can be solved with Packer: it can spin up an instance, create an AMI from it, and terminate the instance.
Read more about CI/CD and CI/CD-related tools.
I know this has been partially answered in a bunch of places, but the answers are all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files) and RDS MySQL for its DB (these two points aren't particularly important).
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable, meaning that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI. The old EC2 instances are terminated and replaced with instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code in action, you terminate the old instances, and new ones are launched to replace them and start using the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
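A sketch of how that can be wired up with CloudFormation metadata and cfn-hup (the resource name, bucket, and AMI are placeholders; real templates usually also register cfn-hup as a service so it survives reboots):

# CloudFormation fragment (goes under Resources:)
WebServer:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        commands:
          01_sync_site:
            command: aws s3 sync s3://my-site-assets /var/www/html
  Properties:
    ImageId: ami-0123456789abcdef0       # the "basic" AMI with Apache and PHP
    InstanceType: t3.micro
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        # Apply the metadata above on first boot
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
        # Configure and start cfn-hup so a stack update re-runs the sync
        mkdir -p /etc/cfn/hooks.d
        echo -e "[main]\nstack=${AWS::StackId}\nregion=${AWS::Region}" > /etc/cfn/cfn-hup.conf
        echo -e "[site-update]\ntriggers=post.update\npath=Resources.WebServer.Metadata.AWS::CloudFormation::Init\naction=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}" > /etc/cfn/hooks.d/site-update.conf
        /opt/aws/bin/cfn-hup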
Another possibility is using Elastic Beanstalk which could be used to manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that your web app needs
Install Git and set up a local repo, ready for a git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
# Runs once at first boot via the EC2 user data
cd /git/app
git pull
# copy files from the repo to the web folder, e.g.:
# cp -R /git/app/* /var/www/html/
composer install
When done like this, that user data acts as a script which will run on first boot.
When the app auto-scales, will the modifications previously made on the first instance be kept on the new instances?
You will need to add configuration settings to your application archive so that each instance is configured the same way when it is brought online. This is done by creating a folder in your application called .ebextensions. You place files with the .config extension in that folder. These should be in YAML format.
Check these docs for more information:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Linux specific (I assume Linux since you mention SSH):
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
No, Elastic Beanstalk will start a new server using a fresh AMI and your latest deployed application code.
It is considered bad practice to change the instance using SSH login, as it may be replaced at any time by Elastic Beanstalk.
If you'd like to change something in the instance, you can either use a custom AMI (not fun) or create an .ebextensions folder and put some configuration shell scripts there (see documentation).
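For illustration, such a configuration file can be as small as the following sketch (the file name, script path, and package are placeholders):

# .ebextensions/01-custom.config  (any file name ending in .config, YAML format)
files:
  /opt/scripts/setup.sh:                 # placeholder path
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # one-time instance customization goes here, e.g. installing a package
      yum install -y htop
commands:
  01_run_setup:
    command: /opt/scripts/setup.sh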