Pretty new to Amazon AWS. I've inherited the setup from another dev, and I've never used AWS before. I need some help with copying and replacing instances. Essentially there's a dev instance and a production instance. I need to back up the dev instance and then replace the dev instance with the current production instance.
What I've done so far:
Created an image of the test instance, so it's currently under AMIs.
What I need:
Instructions on how to replace the current dev instance with the production one.
Additional Info:
Both instances have different Public DNS and Public IP.
Anyone got any quick instructions on how to accomplish this?
Thanks!
The 'quick' instructions are to simply create an AMI of the production instance (shut it down first), then use that saved AMI as the basis for creating a new instance. You'll end up with 3 instances, and you can delete your old dev instance once you confirm the new copy of production came up as you wanted. You don't actually 'replace' the dev instance; you create the new one and delete the old.
Lots of links to be found for this, but here is one:
http://docs.aws.amazon.com/AWSToolkitVS/latest/UserGuide/tkv-create-ami-from-instance.html
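If you'd rather script it, here's a rough AWS CLI sketch of the same steps (the instance IDs, AMI name, and instance type are placeholders, not your actual values):
# Create an AMI from the stopped production instance
aws ec2 create-image --instance-id i-0abc123prod --name "prod-copy-2016-02-01"
# Launch a new instance from that AMI (adjust type, key, and security group to match yours)
aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro --key-name my-key
# Once you've verified the new instance, terminate the old dev instance
aws ec2 terminate-instances --instance-ids i-0abc123olddev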
I know this has been partially answered in a bunch of places, but the answers are all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySql for its DB (these two points aren't particularly important.)
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances?
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable. Meaning, that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI. The old EC2 instances are terminated and replaced with ones running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
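A rough CLI sketch of that flow (the instance ID, AMI ID, launch configuration name, and Auto Scaling group name below are placeholders):
# Bake a new AMI from the freshly deployed worker instance
aws ec2 create-image --instance-id i-0worker123 --name "app-v42"
# Point the Auto Scaling group at the new AMI via a new launch configuration
aws autoscaling create-launch-configuration --launch-configuration-name app-v42 --image-id ami-0abc0042 --instance-type t2.micro
aws autoscaling update-auto-scaling-group --auto-scaling-group-name app-asg --launch-configuration-name app-v42
# Then terminate the old instances (rolling) so replacements launch from the new AMI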
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code in action, you terminate the old instances and new ones are launched to replace them which start using the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
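As a minimal sketch of the User Data for that kind of "basic" AMI (the bucket name and paths are assumptions, and the instance needs an IAM role with read access to the bucket):
#!/bin/bash
# Pull the current release from S3 into Apache's document root
aws s3 sync s3://my-app-releases/current /var/www/html
# Restart Apache so the new code is served
service httpd restart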
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
Another possibility is using Elastic Beanstalk which could be used to manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything your web app needs installed
Install Git and set up a local repo ready for a git pull.
Shut down the instance and create an AMI of it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
cd /git/app
git pull
# copy files from the repo to the web folder (this path is illustrative)
cp -R /git/app/. /var/www/html/
cd /var/www/html
composer install
When done like this, that user data acts as a script which will run on first boot.
Hi, this is a very noob question, but I am trying to deploy my Node.js API server on AWS.
Everything is working fine with one m1.large instance that my Front End running on S3 connects to.
Now I want to scale and put my EC2 instance, and possibly many more, behind an ELB and an Auto Scaling Group.
Do I need to duplicate my server code on every EC2 instance?
If so, I assume I'll have to create a separate DB server which all of the EC2 instances will connect to.
Am I right? Can anyone experienced with Amazon AWS answer this? I tried googling, but most of the links point to detailed tutorials which don't answer my question.
Any help would be much appreciated. Thanks
Yep, that's basically correct. The code needs to be on all instances fronted by the load balancer. For the database, you may want to look into RDS.
Of course not... but sure, you can if you want to.
That's why there are EFS volumes, which can be shared by more than one EC2 instance, but you have to choose a region that supports them, since they are only available in certain regions. As a candidate AWS certified architect, I would offer you more than two options.
You can follow your first approach: create an EC2 instance, put your code inside, then create an AMI and use this AMI to launch your upcoming EC2s through the Auto Scaling group. In my opinion it's a poor choice, since on any code change you have to go onto each one, put the new code there, and then create a new AMI and a new Auto Scaling configuration. Lots of stuff to do, but it will work.
Second approach: follow the first approach but do not create an AMI. Instead, upload your code to a private (I suppose) repo like GitHub or Bitbucket, install SSM and the appropriate roles for managing EC2, and on every code change push to the repo and pull it on your EC2s using SSM (see the sketch below). Of course you could also write a webhook on Bitbucket to call an API and run the git pull command on each EC2. That last sentence could probably count as a third approach, but it needs more coding!
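A rough sketch of the SSM part (the tag filter and repo path are assumptions, and the instances need the SSM agent running plus an instance profile that allows SSM):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Role,Values=web" \
  --parameters 'commands=["cd /var/www/app && git pull"]'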
Last but not least: use an EFS volume, put your code there, mount this volume on your EC2s, add an auto-mount entry for every boot, alter your Apache httpd config to point at this EFS folder, and create an AMI with this configuration (a mount sketch follows below). Voila! Every new EC2 will use the same code, located on this shared/network volume. Whenever you need to change something, log in to a third instance outside of your Auto Scaling group for a certain amount of time, upload your changes, and then turn it off; all of your EC2s will pick up the new code immediately. Of course you may also pull the changes from a repo, following the third approach.
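A minimal sketch of the EFS mount, assuming a hypothetical file system ID and mount point (the instance needs an NFS client installed, e.g. nfs-utils):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# Remount automatically on every boot
echo "fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,_netdev 0 0" | sudo tee -a /etc/fstab
# Then point Apache's DocumentRoot at /mnt/efs/app in httpd.conf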
Maybe there are more approaches. I'm using the third one, with private repos of course, and so far I haven't faced any problems (fingers crossed)!
One other option is to use Elastic Beanstalk to deploy Node.js applications; AWS has a guide specific to Node.js. This will take care of most of the stuff you would otherwise need to do yourself if you only use EC2, for example ELB, Auto Scaling, CloudWatch, etc.
For the database, you may want to use a master/slave setup with read replicas. Another option is to evaluate NoSQL databases like DynamoDB if it fits your use case. The scalability of DynamoDB tables is managed by AWS, so you don't need to worry about it.
Some time ago (years), I deployed a website made with Django on AWS. Now I'm trying to make a change to just one .html file, but I don't remember how to connect to my instance anymore. I tried looking at the AWS documentation, but there have been so many changes, and so many new components have been added, that I don't know where to start, and I don't remember exactly how it all worked.
Unfortunately, all the private keys, passwords, and that kind of stuff are on an old computer that I don't even have anymore. The only things I have are the username and password for my AWS console, and a Mac.
I would appreciate it if you could point me to where to start!
Thank you!
You have a few options. Personally, I would find the EC2 instance your site is hosted on and create an image of it. This can be done by selecting your instance in the EC2 console; it will create an Amazon Machine Image (AMI). You can then launch a new EC2 instance from this AMI and specify a new PEM key. Once the instance is launched, you can connect to it and edit the .html file.
You can create an AMI (Amazon Machine Image) of any running EC2 server (which is where your application likely lives).
After creating an AMI, you can spin up a new EC2 instance using that AMI. The new one will be a clone of the old one, but you can specify/download a new keypair.
From there you can log in via ssh, and I'd recommend putting things in source control afterwards.
A lot of the rest depends on your DNS settings... if you're using an Elastic IP, you can just assign it to the new machine. If you're using an Elastic Load Balancer, you can simply move the old instance off and register the new one.
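If the Elastic IP route applies, a one-line sketch of reassigning it via the CLI (the instance and allocation IDs here are placeholders):
aws ec2 associate-address --instance-id i-0newinstance --allocation-id eipalloc-0abc1234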
I have an Amazon EC2 instance mapped to one data volume.
This instance runs my HTTP server and has my server code.
Now I have to scale my app by creating a new instance and load balancing.
But if I create the new instance by cloning the existing one, how will the code and HTTP vhost file stay in sync?
I cloned the instance from a snapshot the first time.
But I want it so that when I upload my code to one instance, it syncs to the other instance as well.
How can I achieve this? Do I need to configure rsync from one instance to the other?
"Baking" custom AMIs is a very simple way to do this. Start a new instance from your AMI (start with a snapshot of your current instance), make changes on it like update application/configurations/system, test, create new AMI from it, start new instances from that new AMI, test them and then swap old instance in the ELB with the new ones.
There are also many tools you can use to automate your application deployment, like Puppet, Chef, or one of Amazon's offerings: CodeDeploy, OpsWorks, or Elastic Beanstalk. I recommend you adopt one such tool eventually.
From your description you cloned your first web server (www1) to make a second web server (www2).
Now when you make code edits you want the code to be in sync between the two webservers.
Rsync can help to make that easy.
From the 2nd web server (www2)
rsync -chavzP --stats username@IPorNAMEofwww1:/path/to/copy/on/www1 /path/to/putfiles/on/www2
Once you get that working from the command line, add it to a cron job so it syncs on a schedule (e.g. hourly), as sketched below. It should only sync the changes, not every file.
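A minimal crontab entry for that (edit with crontab -e on www2; the paths and log file are placeholders):
# Sync from www1 at the top of every hour
0 * * * * rsync -chavzP --stats username@IPorNAMEofwww1:/path/to/copy/on/www1 /path/to/putfiles/on/www2 >> /var/log/www-sync.log 2>&1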
I am new to AWS EB and I am trying to figure out how to backup and restore an entire EB environment. I created an AMI based on the EC2 instance generated by EB, and took a snapshot of RDS, also created by EB.
The problem I have is: how do I restore it, assuming this is the correct approach to backup? Also, I am doing it manually; shouldn't there be an automated way of doing this within EB? By the way, when I created the AMI, it destroyed the source instance, and EB just created a new EC2 instance without any of my changes.
How do I save & restore configuration changes to my application that impact both filesystem and database?
Unfortunately, Amazon AWS Elastic Beanstalk (EB) does not support restoring databases that contain live data, if those databases were created with EB. If you reload (AKA AWS "deploy") the EB saved configuration, you get a blank database!
I called them and they told me to create the RDS DB separately and update the application code to connect to the DB once you know its name. If you restore the RDS DB, it will have a new name too, so you have to update your code again to connect to it.
Also, if your code and environment are fine but you want to restore your database, again it will have a new name and you will need to change your code.
How to change your code easily and automatically deploy it is a whole other question for which I don't have an answer yet.
So basically the RDS DB provisioning within Elastic Beanstalk has very limited uses, maybe coding and debugging and testing, but not live production use. :(
This is as of Jan 2015.
First, go into your EB environment and save the current config. Then go to a running EC2 instance created by EB and make an image of it. Use that new AMI ID by going to the EB configuration and setting it there. EB will rebuild the environment, tearing down all running instances and creating new ones.
For your RDS instance, you should make a backup and restore it with a new instance name, as the docs say you will lose it if the environment is destroyed. You should probably just manually set the environment variables like RDS does (see the sketch below) and set up the proper security groups between RDS and EC2.
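A minimal sketch of setting those variables with the EB CLI, using hypothetical values (they mirror the RDS_* variables Elastic Beanstalk normally injects for an attached database):
eb setenv RDS_HOSTNAME=mydb.abc123xyz.us-east-1.rds.amazonaws.com RDS_PORT=3306 RDS_DB_NAME=myapp RDS_USERNAME=admin RDS_PASSWORD=changeme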
One option I think could work is renaming the RDS instance (since the environment seems to break anyway), then destroying the environment, creating a new one with an attached RDS instance, destroying that new RDS instance, and renaming the old one to the new one's name. That may work.
As always make proper backups before proceeding with any of these ideas.