Seamless switching of an application from one AMI to another AMI - amazon-web-services

I have my OpenDJ LDAP setup running on an Ubuntu 16.04 base AMI. I now want to replace the base AMI with a new, patched AMI without impacting my working OpenDJ setup, and I need to do this every time a new AMI is released. One way I can think of is to spin up a new EC2 instance from the new AMI, export the data from the existing LDAP, and import it into the new instance. But I am wondering if there is a better and smarter way to do this automatically. How do I switch an application from one AMI/EC2 instance to another without redoing the configuration or breaking its functioning?

1. Create an EFS file system designated for the back-end database files (e.g. /opt/ds).
2. Install DS/OpenDJ so that the instance files are separate from the install files (see the quote from this link below).
3. For each new instance, launch the AMI with the updated software as needed.
4. In the user data script for the instance, attach the instance data folder from Step 1 (a minimal sketch follows this answer).
The purpose of this article is to provide information on installing DS/OpenDJ so that the instance files (user data) are separate to the install files (binaries). This setup allows you to separate all your backend database files and configuration in a separate file system to your binaries.
This approach isolates the application data from the software binaries and allows you to switch AMIs easily.
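For illustration, a minimal user data sketch for Step 4 might look like the following. It assumes a placeholder EFS file system ID (fs-12345678) in us-east-1 and an Ubuntu base AMI; adjust the DNS name, mount options and DS start-up to your own layout.

#!/bin/bash
# Sketch: mount the shared EFS file system that holds the OpenDJ instance data.
# fs-12345678 and the region are placeholders.
apt-get update -y
apt-get install -y nfs-common
mkdir -p /opt/ds
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /opt/ds
# Persist the mount across reboots.
echo "fs-12345678.efs.us-east-1.amazonaws.com:/ /opt/ds nfs4 nfsvers=4.1,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
# Then start DS/OpenDJ from the binaries baked into the AMI, pointing it at the
# instance files under /opt/ds as described in the linked article.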

Related

AWS CloudFormation does not copy the data to the newly created stack

In AWS CloudFormation, I use the CloudFormer tool. I can use that tool to create a CloudFormation template from existing resources and then use the template to create a stack. I tested with that tool and it works (it can create instances with the same memory size, the same volume size and the same VPC settings, and auto-start the instances). But there are no files in the volumes.
Do I have to create a snapshot of the existing volume, create a new volume from the snapshot, attach it to the newly created instance, and copy the files manually?
Or is there any better way?
Do I have to create a snapshot of the existing volume, create a new volume from the snapshot, attach it to the newly created instance, and copy the files manually?
CloudFormation provisions resources, but it is not responsible for provisioning the contents of those resources; that you have to do yourself.
You can leverage EC2 User Data to pull files from S3 or other repositories as the instance boots.
Or is there any better way?
If you want to share data between applications, EFS is always an option. In your case, though, using User Data might be effective.
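As a rough sketch of the User Data approach (the bucket name and path are placeholders, and the instance needs an IAM role that allows reading from the bucket), each instance the stack launches could populate its volume at boot:

#!/bin/bash
# Sketch: pull application files from S3 on first boot (Amazon Linux assumed,
# so the AWS CLI is already installed).
mkdir -p /opt/myapp
aws s3 sync s3://my-app-bucket/releases/current/ /opt/myapp/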
If you wish to launch new EC2 instances with software automatically loaded, there are basically two choices:
Use a pre-configured AMI, or
Use a startup script to load the software
Pre-configured AMI
An Amazon Machine Image (AMI) is a copy of a disk. When a new EC2 instance is launched, an AMI is selected and the boot disk (and optionally other disks) are automatically pre-loaded with the contents of the AMI.
A common practice is to boot an EC2 instance and configure it as desired. Then, create an AMI. Thereafter, when a new EC2 instance is required for the application, launch it using the pre-built AMI.
There are also tools available to automate the building of an AMI, such as Netflix Aminator and Packer.
Benefits: New machine boots quickly, fully-configured.
Issues: Need to create a new AMI whenever you update your software.
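As an illustration of the bake-and-reuse cycle (all IDs and names below are placeholders), the AWS CLI equivalent is roughly:

# Create an AMI from an instance you have already configured.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "myapp-baked-2016-02-01" \
  --description "Pre-configured application server"

# Later, launch new instances directly from the baked image.
aws ec2 run-instances \
  --image-id ami-0abc1234 \
  --instance-type t2.micro \
  --key-name my-key \
  --security-group-ids sg-0123abcd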
Use a startup script to load the software
When an Amazon EC2 instance is launched from a standard Amazon-provided AMI (e.g. Amazon Linux, Microsoft Windows), software on the AMI automatically looks at the User Data passed to the EC2 instance. If the User Data contains a startup script, the script will be executed -- but only the first time that an instance is launched. This is an excellent way to install software on the instance.
You are responsible for writing the script. The script should install whatever tools, software and data you want on the instance.
Benefits: Updating your software? Just launch a new instance and the script will install the latest version of your software (assuming you have written the script to always point to the latest version).
Issues: It takes longer to launch the new instance, since the software is being installed.
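A minimal example of such a User Data startup script (the packages and repository URL are only placeholders) might be:

#!/bin/bash
# Runs once, on the first boot of the instance.
yum update -y
yum install -y httpd php git
# Fetch the latest application code (placeholder repository).
git clone https://github.com/example/myapp.git /var/www/html
service httpd start
chkconfig httpd on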

How do I copy varying filenames into a Vagrant box (in a platform-independent way)?

I'm trying to use Vagrant to deploy to AWS using the vagrant-aws plugin.
This means I need a box, then I need to add a versioned jar (e.g. myApp-1.2.3-SNAPSHOT.jar) and some statically named files. This also needs to work on both Windows and Linux machines.
I can use config.vm.synced_folder locally with a setup.sh to move the files I need using wildcards (e.g. cp myApp-*.jar), but the plugin only supports rsync, so that only works from Linux.
TL;DR: Is there a way to copy files using wildcards in Vagrant?
This means I need a box
Yes and no. Vagrant heavily relies on the box concept, but in the context of the AWS provider the box is a dummy box; the system will look at the aws.* variables to connect to your account.
Vagrant will spin up an EC2 instance and connect to it. You need to make sure the instance is linked with a security group that allows the connection and opens the port to your IP (at minimum).
If you are running a provisioner, please note the script is run on the EC2 instance, not on your local machine.
What I suggest is the following (see the sketch after this list):
- copy the jar files that are necessary to S3 or somewhere the EC2 instance can easily access them
- run the provisioner to fetch the files from this source (S3)
- let it go.
If you have a quick turnaround of files in development mode, you can push to a Git repo from which the EC2 instance can pull the files and deploy the jar directly.
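A shell provisioner along these lines (the bucket and jar names are placeholders, and the instance needs credentials or an IAM role to read the bucket) avoids the rsync/wildcard problem, because the wildcard is expanded on the EC2 instance rather than on your workstation:

#!/bin/bash
# Provisioner sketch: fetch the versioned jar from S3 on the EC2 instance.
mkdir -p /opt/myapp
aws s3 cp s3://my-deploy-bucket/builds/ /opt/myapp/ \
  --recursive --exclude "*" --include "myApp-*.jar"
# Start or deploy the jar however your service layout expects, e.g.:
# java -jar /opt/myapp/myApp-*.jar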

Deploying to EC2 instances behind a load balancer; PHPStorm + GitHub

I know this has been partially answered in a bunch of places, but the answers are so... all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySql for its DB (these two points aren't particularly important.)
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable, meaning that it can be terminated at any time and you should not care; a replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI image. The old EC2 instances are terminated and replaced with instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
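A rough awscli outline of that bake-and-swap cycle (all instance, group and AMI names are placeholders) is:

#!/bin/bash
# Sketch: bake a new AMI from the "setup" instance and roll it into Auto Scaling.
NEW_AMI=$(aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "myapp-$(date +%Y%m%d%H%M)" \
  --query ImageId --output text)
aws ec2 wait image-available --image-ids "$NEW_AMI"

# Launch configurations are immutable, so create a new one with the new AMI...
aws autoscaling create-launch-configuration \
  --launch-configuration-name "myapp-lc-$NEW_AMI" \
  --image-id "$NEW_AMI" \
  --instance-type t2.micro

# ...point the Auto Scaling group at it; old instances can then be terminated
# and replaced by instances running the new code.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name myapp-asg \
  --launch-configuration-name "myapp-lc-$NEW_AMI"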
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code in action, you terminate the old instances and new ones are launched to replace them which start using the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
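Concretely (bucket, paths and IDs are placeholders), the instances' boot script does the download, and a rolling update is then just a matter of replacing instances one at a time and letting the Auto Scaling group launch fresh ones:

#!/bin/bash
# Boot-time deploy sketch: every new instance fetches the current release from S3.
aws s3 sync s3://my-deploy-bucket/releases/current/ /var/www/html/
service httpd restart

# Rolling update, run from a workstation or CI box for each old instance:
# aws autoscaling terminate-instance-in-auto-scaling-group \
#   --instance-id i-0123456789abcdef0 --no-should-decrement-desired-capacity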
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
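A minimal cfn-hup setup for method 3 might look like the sketch below (the stack name, region, logical resource name and deploy script are placeholders): cfn-hup polls the stack's metadata and runs the hook's action when it changes.

#!/bin/bash
# Sketch: configure cfn-hup so the instance re-deploys itself on a stack update.
mkdir -p /etc/cfn/hooks.d
cat > /etc/cfn/cfn-hup.conf <<'EOF'
[main]
stack=my-stack-name
region=us-east-1
interval=5
EOF

cat > /etc/cfn/hooks.d/deploy.conf <<'EOF'
[deploy-on-update]
triggers=post.update
path=Resources.WebServerInstance.Metadata.AWS::CloudFormation::Init
action=/usr/local/bin/pull-assets-from-s3.sh
runas=root
EOF

# Then make sure the cfn-hup daemon (from the aws-cfn-bootstrap package) is running.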
Another possibility is using Elastic Beanstalk which could be used to manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that you need for your web app.
Install Git and set up a local repo ready to git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
cd /git/app
git pull
# copy files from repo to web folder
composer install
When done like this, that user data acts as a script which will run on first boot.

Does Elastic Beanstalk maintain the changes made directly via SSH for new instances?

When the app autoscales, will the previous modifications made on the first instance be kept on the new instances?
You will need to add configuration settings to your application archive so that each instance is configured the same way when it is brought online. This is done by creating a folder in your application called .ebextensions. You place files with the .config extension in that folder. These should be in YAML format.
Check these docs for more information:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
Linux specific (I assume Linux since you mention SSH):
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
No, Elastic Beanstalk will start a new server using a fresh AMI and your latest deployed application code.
It is considered bad practice to change the instance via SSH login, as it may be replaced at any time by Elastic Beanstalk.
If you'd like to change something on the instance, you can either use a custom AMI (not fun) or create a .ebextensions folder and put some configuration shell scripts there (see the documentation and the example below).
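As a small illustration (package, file and command names are placeholders), you could generate such a file from the root of your application source; the resulting .ebextensions/01-setup.config is applied to every instance Elastic Beanstalk launches, so a change you would otherwise have made over SSH is reproduced automatically:

#!/bin/bash
# Sketch: create a .ebextensions config file (YAML) inside the application archive.
mkdir -p .ebextensions
cat > .ebextensions/01-setup.config <<'EOF'
packages:
  yum:
    git: []
commands:
  01_enable_httpd:
    command: "chkconfig httpd on"
EOF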

Update EC2 AMI root drive

I have an EC2 AMI that I create instances from to be used to execute builds. I now need to modify this AMI because I need an additional program installed on it. What I want to do is make my AMI point at a different snapshot to use as its root drive (a snapshot with the new program installed) and things would be all well and good. But, I can't find a way to do this. Someone from Amazon on the forums said it's not possible, but I'm not so sure. So, I wanted to ask here.
I know I can just take the updated snapshot I want and create an entirely new AMI from it, but this results in a new AMI ID and now I need to go change the AMI ID which my scripts use to launch a new instance. I don't want to do this every time I realize I need a change to my AMI setup.
You can build a private AMI from an existing EC2 instance (of course you can make it public as well). Then you can start a new EC2 instance from "My AMIs" with all of the applications/packages already installed.
Take a look at this doc:
Amazon Machine Images (AMI)
There are new features, such as Docker support, introduced by AWS last month, but they are not yet ready for public use.
Another benefit of creating a private AMI image is that it will save you a lot of time when you need to launch a new instance.
If you need to update your configuration file after a new AMI is created, I recommend setting up a CI trigger (via Jenkins, for example): run an awscli command in a script and it will easily update your config file. All these tasks, including creating a new AMI and updating the configuration file, can be done automatically under Jenkins/Bamboo.
If you are not comfortable with this approach, then think about a CloudFormation template; it will be a big improvement to your system. It takes effort to set up fully, but the CloudFormation way will save you a lot of time on future changes.
In CloudFormation, you set up a launch configuration and its Auto Scaling group. You still need to create a private AMI image, but every time you create a new image, you trigger a script to update the AMI ID in the launch configuration; after that, any new instance will automatically use the new AMI (a rough sketch follows below).
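As an illustration of that trigger (the stack name, parameter name and instance ID are placeholders, and it assumes the template exposes an AmiId parameter wired into the launch configuration), the CI job could run roughly:

#!/bin/bash
# CI sketch: bake a new AMI and push its ID into the CloudFormation stack so the
# launch configuration, and therefore every future instance, uses it.
NEW_AMI=$(aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "build-ami-$(date +%Y%m%d%H%M)" \
  --query ImageId --output text)
aws ec2 wait image-available --image-ids "$NEW_AMI"

aws cloudformation update-stack \
  --stack-name build-fleet \
  --use-previous-template \
  --parameters ParameterKey=AmiId,ParameterValue="$NEW_AMI"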