Replicate changes made on one EC2 to another EC2 Server - amazon-web-services

I have two EC2 servers named Ec2-Webserver-1 and EC2-WebServer-2 inside the same VPC, in two different subnets, served by an Application Load Balancer.
When I make small changes to the first server, I have to manually apply the same changes to the second server too. Otherwise, I have to create an AMI and launch a new server from the AMI.
Creating an AMI every time I make a small change does not seem like the right approach.
Are there any AWS or third-party tools that can automatically replicate the changes made on Server 1 to Server 2? I am currently using a CentOS AMI.

I would suggest looking into CloudFormation. You can define your EC2 instance, the IAM roles you want it to have, and a whole lot of other things. Once that is done, you can just run the CloudFormation template and AWS will provision the EC2 instance with your defined settings automatically. CloudFormation link
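For example, once you have a template (here called ec2.yml, a placeholder name) describing the instance and its IAM role, deploying or updating both servers becomes a single CLI call. This is only a rough sketch, not a full template:
# Create or update the stack described in ec2.yml (names are placeholders).
aws cloudformation deploy \
    --stack-name web-servers \
    --template-file ec2.yml \
    --capabilities CAPABILITY_NAMED_IAM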

You should be looking into CodeDeploy: https://aws.amazon.com/codedeploy/getting-started/?nc=sn&loc=4 Possibly combine it with CodePipeline. Here is a starting point for deciding whether you need one or both: https://forums.aws.amazon.com/thread.jspa?threadID=172485
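As a rough illustration of what a CodeDeploy rollout looks like from the CLI (the application name, deployment group, and repository below are placeholders, not something from the question):
# Deploy a specific GitHub commit to every instance in the deployment group.
aws deploy create-deployment \
    --application-name MyWebApp \
    --deployment-group-name WebServers \
    --github-location repository=my-org/my-webapp,commitId=abc1234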

Related

AWS EC2 Auto scaling philosophy

Hello there!
I'm at the beginning of my investigation of AWS, and one of the concepts is unclear to me, so I would like to ask for help understanding the functionality.
I have a PHP web application installed on EC2.
My application is heavily loaded and I need to use a load balancer for the best performance. How to set this up is clear. The code of my application is hosted on GitLab.
After the EC2 and load balancer setup is done, I want to use Auto Scaling.
So, I need to use an Auto Scaling group.
Main question: what should I do next? As I understand it, I need to somehow create new instances, but I need a correct image for each instance, with all dependencies and source code.
Automatic code deployment is also a big question. When a new feature is merged, I need to run the GitLab pipeline and somehow deliver the code to the new EC2 instances.
So what do I need to read and investigate to be able to deploy new code to new EC2 instances automatically? Does AWS provide tools for this?
Thank you for the help with my journey.
Regards,
Mavis.
You can begin with this link, https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-from-instance.html, which explains how to create an Auto Scaling group based on an existing EC2 instance.
In short, you can generate an AMI (Amazon Machine Image) from your current EC2 instance (the PHP host) and create a launch configuration/launch template for your Auto Scaling group.
Next, you can add a load balancer to distribute traffic to these instances by associating it with target groups and your Auto Scaling group: https://docs.aws.amazon.com/autoscaling/ec2/userguide/attach-load-balancer-asg.html
For automatic deployment, you can have your pipeline create a new launch configuration, or have each instance fetch the latest version of your PHP code from S3 or another location in its user data (see the sketch at the end of this answer). You can use GitLab CI or CodeDeploy, which is a perfect candidate for this kind of work.
Also be aware that an Auto Scaling group is stateless (it creates and terminates instances), so you must store uploaded images and assets in a shared location such as S3, a database, or EFS; otherwise, if an instance is unhealthy or terminated by the ASG, you may lose data.
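Here is a minimal user-data sketch of the "fetch the code on boot" idea, assuming the PHP build is published to an S3 bucket (bucket name, key, web root, and package names are placeholders, and the packages assume a yum-based distribution):
#!/bin/bash
# Runs on first boot: install the web stack and pull the latest PHP build from S3.
set -e
yum install -y httpd php awscli
aws s3 cp s3://my-app-releases/latest.tar.gz /tmp/app.tar.gz
mkdir -p /var/www/html
tar -xzf /tmp/app.tar.gz -C /var/www/html
systemctl enable --now httpd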

AWS EC2 instances with auto scaling staying in sync

I have a Node.js web application currently running on a single EC2 instance on AWS. I am thinking of using auto scaling with 2 or more EC2 instances since the load on the application is increasing.
I have been trying to understand something about AWS Auto Scaling for a couple of hours now, but I can't seem to find an answer anywhere.
Currently, I often SSH into my Ubuntu EC2 instance to modify things or to run a deploy command (which grabs the latest code from GitHub). How does this work when you have, say, 4 instances running under Auto Scaling?
So if I SSH into one server and change the server.js file, what happens to the other 3 instances?
If that is not possible, what are my choices? I have seen many people saying that using S3 is the way to keep things in sync, but I don't fully get that. Do I have to keep all my source code in S3 and do my edits from there?
You won't be able to modify files directly on the server once they are in an auto-scaling group. Changing something on one server won't be reflected on the other servers, and even if you manually updated all the currently running servers, any servers added by auto-scaling actions will not have those changes.
There are many ways to solve this, for example using AWS CodeDeploy.
You could also configure an EC2 user-data script in your Auto Scaling launch configuration, which will run on each server when it is created. That script could check out the latest code from Git, or pull the latest build artifact from S3, and then start the app. When you have an update ready to deploy, you simply flag the current instances as "unhealthy" and wait for the Auto Scaling group to automatically replace them with new, updated instances.
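For example, once the launch configuration points at the updated user data, each old instance can be flagged from the CLI so the Auto Scaling group replaces it (the instance ID is a placeholder):
# Mark the instance unhealthy; the ASG terminates it and launches a fresh one
# that runs the user-data script and therefore starts with the new code.
aws autoscaling set-instance-health \
    --instance-id i-0123456789abcdef0 \
    --health-status Unhealthy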
You could use AWS EFS to host your application code, so that all web servers get their content from EFS instead of from their individual disks. This way you don't have to worry about modifying content on each server separately.
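A rough sketch of what that looks like on each instance, assuming the amazon-efs-utils mount helper (the file system ID and mount point are placeholders; a plain NFS mount also works):
# Mount the shared EFS file system at the web root.
yum install -y amazon-efs-utils
mkdir -p /var/www/html
mount -t efs fs-0123456789abcdef0:/ /var/www/html
# Persist the mount across reboots.
echo "fs-0123456789abcdef0:/ /var/www/html efs _netdev,tls 0 0" >> /etc/fstab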
One way you can do it is by using GitHub: update your code, push it to GitHub, then terminate your existing instances and let the Auto Scaling group spin up new instances with the updated code. Here is a YouTube tutorial video with detailed steps on how to do it: https://www.youtube.com/watch?v=lB3Ip0Yn-Zs
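A rough sketch of that flow from the command line (branch and group names are placeholders; instead of terminating instances one by one as in the video, an Auto Scaling instance refresh performs the same rolling replacement for you):
# Publish the new code, then have the ASG replace its instances with fresh ones
# that pull the updated code on boot.
git push origin main
aws autoscaling start-instance-refresh --auto-scaling-group-name my-web-asg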

How can I update a website running in a private subnet?

I have an AWS website that is running inside a private subnet and I am not sure what the best way is to update it.
I would like something that is not burdensome. Ideally, it would be nice to have an EC2 box (with security groups only allowing select IPs to connect) running the development version of the site, so I could simply copy it over to the private EC2 box with the click of a button.
I am not too familiar with best practices, but the idea of connecting through several EC2 boxes seems burdensome.
Thank You!
Sounds like you might want to make use of AWS CodeDeploy. There are other tools as well, but since you are already on/using AWS this might be a good one to start with:
AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.
https://aws.amazon.com/codedeploy/
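To give an idea of the workflow (application name, deployment group, and bucket are placeholders, and the private instance needs the CodeDeploy agent installed): you bundle the site from your development box into a revision in S3, then ask CodeDeploy to roll it out to the private instance, so you never have to hop through boxes yourself.
# Package the current directory as a revision and upload it to S3.
aws deploy push \
    --application-name MyWebsite \
    --s3-location s3://my-deploy-bucket/website.zip \
    --source .
# Deploy that revision to the deployment group containing the private EC2 box.
aws deploy create-deployment \
    --application-name MyWebsite \
    --deployment-group-name Production \
    --s3-location bucket=my-deploy-bucket,key=website.zip,bundleType=zip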

Deploying to EC2 instances behind a load balancer; PHPStorm + GitHub

I know this has been partially answered in a bunch of places, but the answers are all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySql for its DB (these two points aren't particularly important.)
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances.
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instances as 100% dispensable. Meaning that any of them can be terminated at any time and you should not care. A replacement EC2 instance will start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI image. The old EC2 instances are terminated and replaced with instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code in action, you terminate the old instances and new ones are launched to replace them which start using the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
Another possibility is Elastic Beanstalk, which can manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that your web app needs.
Install Git and set up a local repo that is ready to git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
# This runs as user data on first boot.
cd /git/app
git pull
# Copy files from the repo to the web folder (paths are examples).
cp -R /git/app/. /var/www/html/
# Install PHP dependencies in the deployed copy.
cd /var/www/html
composer install
When done like this, that user data acts as a script which will run on first boot.
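If you launch the replacement instance from the CLI rather than through an Auto Scaling group, the script can be passed in like this (the AMI ID, instance type, and file name are placeholders):
# Launch a new instance from the AMI and supply the deploy script as user data.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --user-data file://deploy.sh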

Do I need to duplicate code on every EC2 instance running behind an ELB?

Hi, this is a very noob question, but I am trying to deploy my Node.js API server on AWS.
Everything is working fine with one m1.large instance that my front end running on S3 connects to.
Now I want to scale and put my EC2 instance, and possibly many more, behind an ELB and an Auto Scaling group.
Do I need to duplicate my server code on every EC2 instance?
If so, I assume I'll have to create a separate DB server which all of the EC2 instances will connect to.
Am I right? Can anyone experienced with AWS answer this? I tried googling, but most of the links point to detailed tutorials which don't answer my question.
Any help would be much appreciated. Thanks
Yep, that's basically correct. The code needs to be on all instances fronted by the load balancer. For the database, you may want to look into RDS.
Of course not, although you certainly can.
That's what EFS volumes are for: shared volumes that can be mounted by more than one EC2 instance, though you have to choose a region that supports them, since they are only available in certain regions. As a candidate AWS certified architect, I would recommend more than two options.
You can follow your first approach: create an EC2 instance, put your code on it, create an AMI, and use that AMI to launch your upcoming EC2 instances through an Auto Scaling group. In my opinion this is a bad choice, since on any code change you have to apply the new code, create a new AMI, and create a new Auto Scaling configuration. Lots of steps, but it will work.
Second approach: like the first, but do not create an AMI. Instead, host your code in a private (I suppose) repo such as GitHub or Bitbucket, install the SSM agent and the appropriate roles for managing EC2, and on every code change push to the repo and pull it onto your EC2 instances using SSM (a sketch of this follows at the end of this answer). You could also write a Bitbucket webhook that calls an API and runs the git pull command on each EC2 instance; that could arguably count as a third approach, but it needs more coding!
Last but not least: use an EFS volume, put your code on it, mount the volume on your EC2 instances, add an auto-mount entry so it attaches on every boot, point the Apache httpd document root at this EFS folder, and create an AMI with this configuration. Voilà: every new EC2 instance will use the same code, located on this shared network volume. Whenever you need to change something, you log in to a separate instance outside your Auto Scaling group for a while, upload your changes, and then turn it off; all of your EC2 instances pick up the new code immediately. Of course, you can also pull the changes from a repo, as in the previous approach.
There may be more approaches; I'm using the third one, with private repos of course, and so far I haven't faced any problems (fingers crossed)!
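A minimal sketch of the SSM step in the second approach (the tag, path, and branch are placeholders; it assumes the instances run the SSM agent and have an instance role that allows Systems Manager):
# Run "git pull" on every instance tagged as part of the web fleet, without SSH.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Role,Values=web" \
    --parameters 'commands=["cd /var/www/html && git pull origin main"]'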
One other option is to use Elastic Beanstalk to deploy Node.js applications. Here is the guide specific to Node.js. This takes care of most of the things you would otherwise have to set up yourself on plain EC2, for example the ELB, Auto Scaling, CloudWatch, etc. (a short EB CLI sketch follows at the end of this answer).
For the database, you may want to use a master with read replicas. Another option is to evaluate NoSQL databases like DynamoDB if it fits your use case. The scalability of DynamoDB tables is managed by AWS, so you don't need to worry about it.
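For reference, a typical Elastic Beanstalk workflow with the EB CLI looks roughly like this (application, environment, platform, and region are placeholders):
# One-time setup in the project directory, then create an environment, then deploy each release.
eb init my-api --platform node.js --region us-east-1
eb create my-api-prod
eb deploy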