Old data still coming from EC2 instance in Auto Scaling group

I created an ELB (load balancer) and an Auto Scaling Group (ASG). The first instance came up and was working fine. I then logged in via PuTTY and updated the index.html file (changing it to "This is my 2nd Instance updated"), and deleted the instance. The replacement instance Auto Scaling launched is serving only the old data. How do I get new instances to come up with the updated content?
I want them to serve "This is my 2nd Instance updated".

I'm not sure I followed, but I think what you're saying is:
You manually SSH to the instance and update index.html; then Auto Scaling launches a new instance and it doesn't have those changes. Is that correct?
Auto Scaling has no way of knowing what you did inside the instance. AWS can't peek inside your instances and look at your data; that would be a pretty big privacy breach.
When you make updates, you need to modify the launch template or launch configuration in some way so that new instances have the updates. Either:
Make a new AMI with the changes
Put the changes in a userdata script
Alternatively, you could have some sort of automation help with this such as:
Have a userdata script which downloads the newest files from an S3 bucket (see the sketch after this list). Just make sure to update the website files in this bucket whenever you make changes
Use some sort of CI/CD pipeline tool such as AWS CodeDeploy. This will automatically push changes to your existing instances, and make sure new instances are launched with the newest code.
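For the S3 option, a minimal user-data sketch might look like this (the bucket name and web root are placeholders, and the instance profile is assumed to allow s3:GetObject):
#!/bin/sh
# Pull the latest site content from S3 on boot (bucket and paths are placeholders).
aws s3 sync s3://my-site-bucket/site/ /var/www/html/
# Start the web server (assumes Amazon Linux with Apache installed).
systemctl enable --now httpd
Every instance the ASG launches runs this on first boot, so new instances always come up with whatever is currently in the bucket.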

Related

Create a testing copy for EC2 + EB instance

I have my application set up on AWS (EB and EC2). My database is PostgreSQL and it is stored on an EBS volume.
I'm going to push a major change to my application (including invasive migrations). To ensure that I don't end up losing data, I want to create a copy of my whole application and update the code on that copy.
The steps I have till now:
Clone an EB instance
Create a snapshot of my EBS and use that to create a new volume
Update the configuration settings of my EB instance to point to the new volume and deploy the new code to the EB instance
I can't find proper documentation for how to do these things on AWS, so I'm looking for confirmation of the steps above to make sure I don't end up wrecking something.
So the way it works: you create a snapshot, create a new EC2 instance with its disk restored from that snapshot, and you have a new running EC2 instance with the same DB.
But I would suggest, if possible, stopping PostgreSQL or the instance before taking the snapshot; this will ensure the state of the DB is intact.
The two EC2 instances will have no relation to each other, and changes made in one DB will not impact the other.
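If it helps, here is a rough sketch of those steps with the AWS CLI (all IDs are placeholders):
# 1. Snapshot the EBS volume that holds PostgreSQL (ideally with the DB or instance stopped).
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-migration copy"
# 2. Create a new volume from that snapshot, in the availability zone of the test instance.
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# 3. Attach the new volume to the cloned instance.
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf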

Modify file on EC2 via Cloudformation-Update stack

I have used a CloudFormation template to create my EC2 instance. On this instance there is a file in the home directory that I copy from S3 while creating the stack.
I have that file stored locally as well. Now, I modify that file locally and want to copy it to S3, and from S3 to the EC2 instance.
I want to automate this process through CloudFormation, so that whenever I modify this file locally, doing an update stack uploads the modified file to S3 and from S3 to my EC2 instance.
Can anyone please help how this can be achieved?
One thing that comes to mind (bearing in mind the application-specific nature of what you are trying to do) is using ECS instead of plain EC2.
Note: this may be overkill, but it would work. Also, if updates were extremely frequent this would be a major pain, so just uploading the file to S3 with a script alongside update-stack (if the update-stack is even necessary) and then polling for changes to that S3 file in your EC2 application would be fine.
Anyway, this is a pattern we use when we are doing something like training a model with new data, which we then wish to deploy to AWS, replacing an application running an older version of the model.
You build the Docker image locally and your special file gets included inside the container. You push the image to Docker Hub, Amazon ECR, or wherever. You update the ECS configuration in the CloudFormation template to use the tag of this new image and update the stack. ECS then pulls the new image, the new container(s) take the place of the old one(s), and they will have your special file inside.
I know of at least one way: put your EC2 instance in an Auto Scaling Group (ASG). Then, on creation, run cfn-init from your UserData and have it fetch the file under sources. Use both Creation and Update policies. For updates, set the WillReplace attribute to true under AutoScalingReplacingUpdate. When you update the stack, CloudFormation will create a new ASG whose instances have a fresh copy of the file. If you signal success on the update, it removes the previous ASG and instances, giving you an immutable-infrastructure setup. Put everything behind a load balancer, and you've got yourself a highly available blue/green deployment too.
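As a rough illustration, the user-data side of that setup might look like the script below. The resource names LaunchConfig and WebServerGroup are placeholders, and the ${AWS::...} variables are assumed to be substituted by Fn::Sub in the template:
#!/bin/bash
# Install the CloudFormation helper scripts (Amazon Linux).
yum install -y aws-cfn-bootstrap
# Fetch the file declared under "sources"/"files" in the LaunchConfig metadata.
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource LaunchConfig --region ${AWS::Region}
# Report success/failure so the UpdatePolicy (WillReplace: true) can swap the ASGs.
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServerGroup --region ${AWS::Region}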

Deploying to EC2 instances behind a load balancer; PHPStorm + GitHub

I know this has been partially answered in a bunch of places, but the answers are all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySql for its DB (these two points aren't particularly important.)
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable. Meaning, that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI. The old EC2 instances are terminated and replaced with ones running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
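A rough CLI sketch of method 1 (all names and IDs are placeholders):
# Bake a new AMI from the "setup" instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "app-v42"
# Launch configurations are immutable, so create a new one with the fresh AMI...
aws autoscaling create-launch-configuration --launch-configuration-name app-v42 \
    --image-id ami-0123456789abcdef0 --instance-type t2.micro
# ...then point the Auto Scaling group at it; instances launched from now on use the new code.
aws autoscaling update-auto-scaling-group --auto-scaling-group-name app-asg \
    --launch-configuration-name app-v42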
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code into action, you terminate the old instances; new ones launch to replace them and start up with the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
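A hypothetical deploy script for method 2 might look like this (bucket and ASG names are placeholders; in practice you would replace instances one at a time and wait for health checks between iterations rather than looping straight through):
# Push the new build to S3, then recycle the instances.
aws s3 sync ./build/ s3://my-deploy-bucket/current/ --delete
for ID in $(aws autoscaling describe-auto-scaling-instances \
    --query "AutoScalingInstances[?AutoScalingGroupName=='app-asg'].InstanceId" \
    --output text); do
  # Replacement instances boot and pull the new code from S3.
  aws autoscaling terminate-instance-in-auto-scaling-group \
      --instance-id "$ID" --no-should-decrement-desired-capacity
done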
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
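For the cfn-hup route, the instance runs a small daemon that watches the stack's metadata and re-runs cfn-init when it changes. A hypothetical hook, written here from a shell script (this assumes /etc/cfn/cfn-hup.conf already names the stack and region, and a template resource called LaunchConfig):
# Hypothetical cfn-hup hook: re-run cfn-init whenever the stack metadata changes.
cat > /etc/cfn/hooks.d/reload-assets.conf <<'EOF'
[reload-on-update]
triggers=post.update
path=Resources.LaunchConfig.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack mystack --resource LaunchConfig --region us-east-1
runas=root
EOF
/opt/aws/bin/cfn-hup --config /etc/cfn   # start the daemon that watches for stack updates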
Another possibility is using Elastic Beanstalk, which could manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that your web app needs
Install Git and set up a local repo, ready for a git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
# pull the latest code on first boot
cd /git/app
git pull
# copy files from repo to web folder (paths are examples)
cp -R /git/app/. /var/www/html/
composer install
When done like this, that user data acts as a script which will run on first boot.

Update EC2 AMI root drive

I have an EC2 AMI that I create instances from to be used to execute builds. I now need to modify this AMI because I need an additional program installed on it. What I want to do is make my AMI point at a different snapshot to use as its root drive (a snapshot with the new program installed) and things would be all well and good. But, I can't find a way to do this. Someone from Amazon on the forums said it's not possible, but I'm not so sure. So, I wanted to ask here.
I know I can just take the updated snapshot I want and create an entirely new AMI from it, but this results in a new AMI ID and now I need to go change the AMI ID which my scripts use to launch a new instance. I don't want to do this every time I realize I need a change to my AMI setup.
You can build a private AMI from an existing EC2 instance (of course, you can make it public as well). Then you can start a new EC2 instance from that image, under "My AMIs", with all applications/packages already installed.
Take a look at this doc:
Amazon Machine Images (AMI)
There are new features, such as the Docker support AWS introduced last month, but it is not yet ready for general use.
Another benefit of creating a private AMI image is that it will save you a lot of time whenever you need to launch a new instance.
If you need to update your configuration file after a new AMI is created, I recommend setting up a CI trigger (via Jenkins, for example): run an awscli command in a script and it will easily update your config file. All of these tasks, including creating a new AMI and updating the configuration file, can be done automatically under Jenkins/Bamboo.
If you are not confident with this approach, then think about a CloudFormation template; it will be a big improvement to your system. It takes effort to set up fully, but the CloudFormation way will save you a lot of time on future changes.
In CloudFormation, you define a launch configuration and its Auto Scaling group. You still need to create the private AMI image, but every time you create a new image you trigger a script to update the AMI ID in the launch configuration; after that, any new instance will automatically use the new AMI.
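For example, a hypothetical Jenkins build step could bake the image and push the new AMI ID into the stack (this assumes the template exposes the AMI as a parameter named AmiId; all names and IDs are placeholders):
# Bake the new AMI and capture its ID.
NEW_AMI=$(aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "build-agent-$(date +%Y%m%d)" --query ImageId --output text)
# Update the stack in place; the launch configuration picks up the new AMI.
aws cloudformation update-stack --stack-name build-agents \
    --use-previous-template \
    --parameters ParameterKey=AmiId,ParameterValue="$NEW_AMI"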

AWS Elastic Beanstalk Backup & Recovery

I am new to AWS EB and I am trying to figure out how to back up and restore an entire EB environment. I created an AMI based on the EC2 instance generated by EB, and took a snapshot of the RDS instance, also created by EB.
The problem I have is how to restore these, assuming this is the correct approach to backup. Also, I am doing it manually; shouldn't there be an automated way of doing this within EB? By the way, when I created the AMI, it destroyed the source instance, and EB just created a new EC2 instance without all my changes.
How do I save & restore configuration changes to my application that impact both filesystem and database?
Unfortunately, Amazon AWS Elastic Beanstalk (EB) does not support restoring databases that contain live data, if those databases were created with EB. If you reload (AKA AWS "deploy") the EB saved configuration, you get a blank database!
I called them, and they told me to create the RDS DB separately and update the application code to connect to the DB once you know its name. If you restore the RDS DB, it will have a new name too! So you have to update your code again to connect to it.
Also, if your code and environment are fine but you want to restore your database, again it will have a new name and you will need to change your code.
How to change your code easily and automatically deploy it is a whole other question for which I don't have an answer yet.
So basically the RDS DB provisioning within Elastic Beanstalk has very limited uses, maybe coding and debugging and testing, but not live production use. :(
This is as of Jan 2015.
First, go into your EB environment and save the current configuration. Then go to a running EC2 instance created by EB and make an image of it. Use that new AMI ID by going into the EB configuration and setting it. EB will rebuild the environment, tearing down all running instances and creating new ones.
For your RDS instance, you should make a backup and restore it with a new instance name, as the docs say you will lose it if the environment is destroyed. You should probably just manually set the environment variables the way RDS does, and set up the proper security groups between RDS and EC2.
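As a sketch, both of those settings can be pushed with the CLI (the environment name, AMI ID, DB endpoint, and the RDS_HOSTNAME variable name are all placeholders/assumptions):
# Point the environment at the new AMI and inject the restored DB endpoint.
aws elasticbeanstalk update-environment --environment-name my-env \
    --option-settings \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=ImageId,Value=ami-0123456789abcdef0 \
    Namespace=aws:elasticbeanstalk:application:environment,OptionName=RDS_HOSTNAME,Value=mydb.example.us-east-1.rds.amazonaws.com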
One option I think could work: rename the RDS instance (the environment seems to break at that point), destroy the environment, create a new one with an attached RDS instance, then destroy that new RDS instance and rename the old one to the new one's name. That may work.
As always make proper backups before proceeding with any of these ideas.