I seem to not understand one central point of AWS Autoscaling:
I created an AMI of an (Ubuntu) EC2 instance with my web server installed, and I reference this AMI in the launch configuration for my Auto Scaling group.
But when Auto Scaling decides to spin up a new instance, how am I supposed to launch my web server on that instance? Am I supposed to write some start-up scripts, or what is the best practice for starting a process on a newly spun-up instance from Auto Scaling?
When I deploy an application (PostgreSQL, Elasticsearch, whatever) to an EC2 instance, I usually do it intending to be able to repeat the process. So my first step is to create an initial deployment script which will do as much of the install and setup process as possible, without needing to know the IP address, hostname, amount of memory, number of processors, etc. Basically, as much as I can without needing to know anything that can change from one instance to the next or upon shutdown / restart.
Once that is stable, I create an AMI of it.
I then create an initialization script which I reference in the launch configuration, so that it executes on each instance launched from the previously created AMI.
That's for highly configured applications. If you're just going with default settings (e.g. IP address = 0.0.0.0), then yes, I would simply run 'sudo update-rc.d <service> defaults 95 10', so that it runs on startup.
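For example, if the web server's init script were installed as /etc/init.d/apache2 (substitute your own service name), the call would look like:

sudo update-rc.d apache2 defaults 95 10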
Then create the AMI. When you create a new instance from that AMI, the webserver should startup by default. If it doesn't, I would look at whether or not you really set the init.d script to do so.
Launching a new instance from an AMI should be no different than booting up a previously shutdown instance.
By the way, as a matter of practice when creating these scripts, I also do a few things to make things much cleaner for me:
1) Create modules in separate bash scripts (e.g. creating user accounts, setting environment variables, etc.) for repeatability
2) Each deployment script starts by downloading and installing the AWS CLI
3) Every EC2 instance is launched with an IAM role that has S3 read access, IAM SSH describe rights, EC2 Address allocation/association, etc.
4) Load all of the scripts onto S3 and then have the deployment / initialization scripts download the necessary bash module scripts, chmod +x and execute them. It's as close to OOP as I can get without overdoing it, but it creates really clean bash scripts. The top-level launch / initialization scripts for the most part just download individual scripts from S3 and execute them (see the sketch after this list).
5) I source all of the modules instead of simply executing them. That way bash shares variables.
6) Make the Linux account creation part of the initialization script (not the AMI). Using the CLI, you can query for users, grep for their public SSH keys requested from AWS, create their accounts and have everything ready for them to log in automagically.
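As a rough sketch of what such a top-level initialization script can look like (the bucket name, module names and paths are placeholders, not a definitive implementation):

#!/bin/bash
# Hypothetical top-level initialization script run at instance launch.
set -e
# Fetch the module scripts from S3 (the instance's IAM role grants S3 read access).
aws s3 cp s3://my-bootstrap-bucket/modules/set_env.sh /tmp/set_env.sh
aws s3 cp s3://my-bootstrap-bucket/modules/create_users.sh /tmp/create_users.sh
chmod +x /tmp/set_env.sh /tmp/create_users.sh
# Source the modules (rather than executing them) so they share variables.
source /tmp/set_env.sh
source /tmp/create_users.sh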
This way, when you need to change something (i.e. change the version of an application, change a configuration, etc.) you simply modify the module script; if the change affects the AMI, re-launch and re-AMI. Otherwise, if it only changes instance-specific configuration, then just launch the AMI with the new initialization script.
Hope that helps...
Related
I am creating an AWS EC2 launch template that includes commands within the User Data field to perform actions when the instance is first launched (package updates, install software, format EBS volumes, etc). In addition to this I also want to perform tasks on reboot or subsequent starting of the instance, such as mounting existing EBS volumes and configuring and mounting volatile SSD volumes. I see here that I can use the multi-part MIME format to have code run when the instance restarts:
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
So, I can clearly modify the User Data after I initially launch the instance, but this is cumbersome: it likely needs manual intervention, or it requires waiting until the instance has executed the initial User Data code that runs when the instance is first initialized.
My question is:
Can the multi-part MIME format be configured to run code that will execute on initialization of the instance and other code that will run every time the instance restarts?
I understand that what you're trying to achieve is passing two sets of commands using Userdata: one set which will be executed on instance creation, and another set which should be executed on every reboot. Please let me know if I misunderstood it.
For the first part, you can use Userdata itself as you already know. For the commands that should run on every reboot, you can leverage rc.local.
The script /etc/rc.local is for use by the system administrator. It is traditionally executed after all the normal system services are started, at the end of the process of switching to a multiuser runlevel, etc. You might use it to start a custom service or for mounting additional Volumes.
To write into /etc/rc.local, you can download the command set from S3 and copy it into the file, or you can simply echo it. Example:
echo 'echo HelloWorld' >> /etc/rc.local
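A slightly fuller sketch of the same idea (the S3 path and script name are placeholders; on many distributions /etc/rc.local must be executable and end with exit 0 for this to work):

#!/bin/bash
# Run once from Userdata at launch: install the per-boot commands into /etc/rc.local.
aws s3 cp s3://my-bucket/on-every-boot.sh /usr/local/bin/on-every-boot.sh
chmod +x /usr/local/bin/on-every-boot.sh
# Insert the call just before the final 'exit 0' line of /etc/rc.local.
sed -i '/^exit 0/i /usr/local/bin/on-every-boot.sh' /etc/rc.local
chmod +x /etc/rc.local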
Hope this helps.
I am able to successfully provision linux ssh users via cloud-init upon initial startup using this guide: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
The issue I am facing is that every time I need to add or remove users, I need to shut down the machine to modify the user-data script. How would I go about creating and removing users while reducing downtime?
If I manually add users (https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance/), will those users be removed when the instance restarts and runs the startup script?
You could potentially use EC2 launch templates. Very easy to set up and easy to clone to different instance sizes and classes. Pretty slick how I use them with user data to control environment variables for dev, stage and prod use too!
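As a rough illustration of that last point (the variable name and value are placeholders), each environment's launch template can carry user data that drops an environment file the application reads:

#!/bin/bash
# Hypothetical: each launch template (dev/stage/prod) sets its own value here.
cat > /etc/profile.d/app_env.sh <<'EOF'
export APP_ENV=staging
EOF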
I know this has been partially answered in a bunch of places, but the answers are so... all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySql for its DB (these two points aren't particularly important.)
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable. Meaning, that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from the EC2 instance and update your Auto Scaling launch configuration with the new AMI image. The old EC2 instances are terminated and replaced by instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
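A rough sketch of that cycle with the AWS CLI (all names and IDs are placeholders, and in practice you would wait for the new image to become available before switching over):

# Bake a new AMI from the freshly deployed worker instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "myapp-v42"
# Create a launch configuration that points at the new AMI...
aws autoscaling create-launch-configuration --launch-configuration-name myapp-v42 \
    --image-id ami-0123456789abcdef0 --instance-type t2.micro
# ...and switch the Auto Scaling group over to it.
aws autoscaling update-auto-scaling-group --auto-scaling-group-name myapp-asg \
    --launch-configuration-name myapp-v42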
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So, to put the new code into action, you terminate the old instances; new ones are launched to replace them and start up with the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
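A minimal sketch of that boot-time pull, assuming the assets live in an S3 bucket of your choosing (the bucket name is a placeholder) and the instance role allows reading it:

#!/bin/bash
# Runs on boot from the "basic" AMI: fetch the current site and drop it into the web root.
aws s3 sync s3://my-deploy-bucket/current/ /var/www/html/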
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
Another possibility is Elastic Beanstalk, which could manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that you need for your web app
Install Git and set up a local repo ready for a git pull.
Shut down the instance and create an AMI of it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
cd /git/app
git pull
# copy files from repo to web folder
composer install
When done like this, that user data acts as a script which will run on first boot.
I'm looking at autoscaling an off-the-shelf Amazon provided Windows AMI (as opposed to a custom one).
After a new instance gets booted (and before!! it gets added to the load balancer), I want some initialization tasks injected and run on the new instance (say, running a PowerShell script).
Secondly, the load balancer needs to know to only start sending requests to the new EC2 instance after this startup script has finished (not sure how to do this either).
After a new instance gets booted (and before!! it gets added to the load balancer), I want some initialization tasks injected and run on the new instance (say, running a PowerShell script).
Take a look at userdata. The long and short of it is that you can configure your instances to run a small script when they are launched, which can be used for bootstrapping.
A common practice is to just include enough code in the userdata to download a variable script from a central location (e.g. Git repository or secure S3 bucket), then execute that - this way you can version and change the actual script doing the work without having to update the userdata.
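On a Linux instance that pattern can look roughly like the sketch below (the bucket and script names are placeholders); the equivalent on a Windows AMI would do the same download-and-run inside a PowerShell userdata block:

#!/bin/sh
# Keep the userdata tiny: fetch the real bootstrap script and run it,
# so the working script can be versioned centrally without touching userdata.
aws s3 cp s3://my-bootstrap-bucket/bootstrap.sh /tmp/bootstrap.sh
chmod +x /tmp/bootstrap.sh
/tmp/bootstrap.sh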
Secondly, the load balancer needs to know to only start sending requests to the new EC2 instance after this startup script has finished (not sure how to do this either).
Load balancers tend to perform a kind of ping request against instances on a specified port, for a specified file, to determine whether each one is healthy or not. I'm not sure what your Windows AMI is for, but it should work in the same way. You just need to point it at a file that will only be available once your bootstrapping is complete - i.e. maybe your bootstrapping script creates a health.txt file somewhere in your system with "OK" as its contents, and your load balancer can ping that file until it exists -> then it's healthy.
If you go with the above idea, ensure that health.txt doesn't exist as part of the AMI you use to spin auto-scaled instances up with, otherwise they will be created with that file in place and may register a false positive before your userdata/bootstrapping script has completed.
For example with a LAMP stack, it's reasonably easy to quantify this because an example userdata script might do the following:
Clone a Git repository
Clean a cache
Set some file/folder permissions
Create "health.html" in the Apache web root with "OK" as its contents, so it'll return a 200 header when pinged
Start Apache
In this example, the load balancer will ping the server's /health.html file every so often until it returns a 200 header, which would only occur once step 5 above has completed.
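A bare-bones sketch of such a userdata script, assuming an Apache web root at /var/www/html (the repository URL and paths are placeholders):

#!/bin/bash
# Steps 1-3: get the code, clean the cache, fix permissions.
git clone https://github.com/example/myapp.git /opt/myapp
rm -rf /opt/myapp/cache/*
cp -R /opt/myapp/public/* /var/www/html/
chown -R www-data:www-data /var/www/html
# Step 4: only now create the health-check file the load balancer pings.
echo "OK" > /var/www/html/health.html
# Step 5: start Apache, so /health.html begins returning a 200.
service apache2 start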
I'm sure you'll be able to work out a similar fit for your Windows AMI.
You can run initialization tasks through PowerShell in the UserData section with
<powershell>./yourscript</powershell>
And, assuming you are using ELB, you can set your 'ping target' to point to a file which is created once the userdata scripts are executed successfully. Until then, ELB won't send traffic to your EC2 instance.
Auto Scaling will add the instance to the ELB automatically for you: http://docs.aws.amazon.com/autoscaling/latest/userguide/autoscaling-load-balancer.html
I am running a couple of Ubuntu EC2 instances, and I want to run an automation script which will pull the code from GitHub whenever a new instance is booted from the AMI. The thing is, presently I SSH into the server and run the command git pull origin master, and it asks for a password / key passphrase.
How do I automate this process? So after booting a new instance from an AMI this script should:
Run
Pull the code and also the submodule
Create couple of files and configure it
Please help me to achieve it.
Thanks
This will probably take some time and configuring, but this might set you on the right path.
First, set up your SSH keys, so that you can automatically pull from a repo without a password. Outlined here: https://help.github.com/articles/generating-ssh-keys
Next, create a startup script to issue the 'pull' command from Github. Here: https://help.ubuntu.com/community/UbuntuBootupHowto
Then save your AMI. When you start a new instance from that AMI, the script should run, pulling in your GitHub changes.
Also to note, make sure Git's remote path is using SSH; if it is HTTPS, it will ALWAYS ask for a password.
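A rough sketch of the boot-time commands such a startup script could contain (e.g. placed in /etc/rc.local before its final exit 0; the repository path is a placeholder and the SSH key from the first step must already be on the AMI):

cd /home/ubuntu/myapp && git pull origin master
git submodule update --init --recursive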
Your best bet would be to utilize the fact that Ubuntu includes CloudInit within its canonical images.
Using CloudInit, you can pass scripts (i.e. shell scripts) as EC2 user-data to execute at various start-up stages.
It would be very easy for you to make your Git command-line sequence execute from such a script. Here is a link to the documentation, which includes examples.
https://help.ubuntu.com/community/CloudInit
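A minimal sketch of a shell-script user-data payload for cloud-init, covering the three steps in the question (the repository URL, paths and config values are placeholders; for a private repo a read-only deploy key or token still has to be available on the instance):

#!/bin/bash
# Runs automatically on the first boot of each new instance (cloud-init user data).
git clone --recursive git@github.com:example/myapp.git /home/ubuntu/myapp
# Create and configure a couple of files (values are placeholders).
cat > /home/ubuntu/myapp/.env <<'EOF'
APP_ENV=production
EOF
chown -R ubuntu:ubuntu /home/ubuntu/myapp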
Create user/password access to your Ubuntu instance. Replicate this particular instance if you need multiple. Now you are free of the key access. If you need to automate a process on that instance, cron it, or send the script via SSH to that instance and let cron find and run it.