Rebuilding an instance in Amazon AWS - amazon-web-services

How can I rebuild an instance in Amazon AWS?
In DigitalOcean this is possible, but I can't find anything like it in AWS.
Thanks

The concept of rebuilding is different in DigitalOcean and AWS. If you want to rebuild an instance in AWS, you need to terminate it and launch another one, changing whatever you need. But if you want to rebuild an instance with the same configuration with less work, check whether AWS CloudFormation fits your needs.
https://aws.amazon.com/pt/documentation/cloudformation/
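For reference, the manual "rebuild" described above boils down to two AWS CLI calls. A minimal sketch, where every ID and parameter below is a placeholder you would replace with your own:

# Terminate the old instance (placeholder instance ID)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Launch a replacement, changing whatever you need (all IDs are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name my-key \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0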

You could create an AMI of your machine, which in turn can be used to launch new instances.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
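As a hedged sketch of that flow with the AWS CLI (the instance ID, AMI name, and instance type are placeholders):

# Create an AMI from the running instance; --no-reboot avoids downtime
# but may capture an inconsistent filesystem state
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-server-rebuild" \
    --no-reboot

# Once the AMI is available, launch a fresh instance from it
aws ec2 run-instances --image-id ami-0fedcba9876543210 --instance-type t3.micro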

Related

How to automate Packer AMI builds?

I built and provisioned an AMI using Packer and the amazon-ebs builder.
I need to rebuild the AMI weekly. Is there a simple solution for this? Do I need a separate EC2 instance for Jenkins, or is that overkill? Would any CI tool be good for this, or is there a simpler approach? My Packer AMI code is hosted on GitHub.
In addition, I create a new EC2 instance from the AMI and tear down the old one weekly. What's the best way to schedule EC2 tear-downs and rebuilds automatically?
So, two issues:
Weekly rebuild of the AMI
Weekly rebuild of the EC2 instance based on the rebuilt AMI
I'm not experienced with DevOps, so please excuse me.
I'm assuming this is the only task you want an automation server for. Otherwise, I would suggest you set up Jenkins or another automation server; it all depends on your needs.
To automate this single task, you don't necessarily need an automation server. The method I'm going to demonstrate is one among many ways to do it. Below are the AWS resources you require:
A Docker image with Packer, the AWS CLI, and any other dependencies installed.
An ECS task configured using the image in #1.
A CloudWatch schedule expression to trigger the ECS task periodically, in this case weekly (see the sketch below).
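A hedged sketch of the schedule with the AWS CLI (the rule name is a placeholder; wiring the rule to the ECS task via put-targets needs a JSON targets file with ecsParameters, which is omitted here):

# Fire once every 7 days
aws events put-rule \
    --name weekly-ami-rebuild \
    --schedule-expression "rate(7 days)"

# Then point the rule at the ECS task with something like:
#   aws events put-targets --rule weekly-ami-rebuild --targets file://targets.json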
Your Docker image should be configured so that running the container rebuilds the AMI. You can write a bash script for this and set it as the container entry point.
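A rough sketch of such an entrypoint script, assuming the Packer template lives in a Git repo and AWS credentials come from the ECS task role (the repo URL and template name are placeholders):

#!/bin/bash
set -euo pipefail

# Fetch the latest Packer template (placeholder repo URL)
git clone https://github.com/example/packer-ami.git /build
cd /build

# Rebuild the AMI
packer build ami.json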
As for the second point, rebuilding the EC2 instance this way is not a best practice. You should have a separate process in place to roll the new AMI out to the respective instances. That said, you can do it by scheduling a Lambda function that terminates the old instance and launches a new one.
I know this is a broad answer, and there are many other ways to do the same thing.

Replicate changes made on one EC2 to another EC2 Server

I have two EC2 servers named Ec2-Webserver-1 and EC2-WebServer-2 inside the same VPC, in two different subnets, served by an Application Load Balancer.
When I make small changes to the first server, I have to manually make the same changes on the other server too. Otherwise I have to create an AMI and build a new server from it.
I think creating an AMI every time I make a small change is not the appropriate approach.
Are there any tools in AWS, or third-party tools, that can automatically replicate changes made on Server 1 to Server 2? I am currently using a CentOS AMI.
I would suggest looking into CloudFormation. You can define your EC2 instance, the IAM roles you want it to have, and a whole lot of other stuff. Once that is done, you can just run the CloudFormation template and AWS will provision the EC2 instances with your defined settings automatically.
You should be looking into CodeDeploy: https://aws.amazon.com/codedeploy/getting-started/?nc=sn&loc=4. Possibly combine it with CodePipeline. Here is a starting point for deciding whether you need one or both: https://forums.aws.amazon.com/thread.jspa?threadID=172485
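As a hedged illustration of the CodeDeploy flow (the application name, deployment group, and bucket are placeholders; both instances would be registered in the same deployment group, so one deployment updates them together):

# Bundle the current directory and push it to S3 as a CodeDeploy revision
aws deploy push \
    --application-name my-web-app \
    --s3-location s3://my-deploy-bucket/app.zip \
    --source .

# Deploy the revision to every instance in the deployment group
aws deploy create-deployment \
    --application-name my-web-app \
    --deployment-group-name web-servers \
    --s3-location bucket=my-deploy-bucket,key=app.zip,bundleType=zip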

Use Ansible to launch new EC2 instance into cluster

What I want to do is use Ansible to create an ECS cluster, then create an EC2 instance and launch it into that cluster, but I can't seem to find a way to do that. I've had no trouble launching and configuring an EC2 instance on its own so far, but it's this next step that's totally blocking me.
The AWS documentation says I can create an EC2 instance with User Data to assign it to a cluster, but this doesn't seem to work when I use the user_data field of Ansible's ec2 module. This is what I have in that field:
#!/bin/bash
# Register this instance with the ECS cluster named "my-test-cluster"
echo "ECS_CLUSTER=my-test-cluster" >> /etc/ecs/ecs.config
I feel like there must just be something I'm not seeing, or else some basic understanding I'm missing. I'm hoping someone can provide some pointers here.
Edit: I wasn't originally using the right ECS-optimized AMI, but even after starting an instance with the correct image I don't see a difference.
I think what you are missing is the proper policy on the instance to associate itself with the cluster. It sounds like you have the rest of it set up fine. I would safely assume that if you logged into the server and checked the ECS agent logs, you would see permission issues.
Take a look here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html. I had a similar issue myself, and setting the proper permissions fixed it.
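To check and fix this from the command line, a hedged sketch (the instance ID is a placeholder, and ecsInstanceRole is the conventional name for the instance profile described in the linked docs):

# On the instance: look for permission errors from the ECS agent
cat /var/log/ecs/ecs-agent.log

# From your workstation: attach the ECS instance profile to the running instance
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=ecsInstanceRole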
Another possibility is that your instance doesn't have Internet access:
Container instances need external network access to communicate with the Amazon ECS service endpoint, so if your container instances do not have public IP addresses, then they must use network address translation (NAT) to provide this access.

Do I need to duplicate code on every EC2 instance running behind an ELB?

Hi, this is a very noob question, but I am trying to deploy my Node.js API server on AWS.
Everything is working fine with one m1.large instance that my front end, hosted on S3, connects to.
Now I want to scale, and put my EC2 instance, and possibly many more, behind an ELB and an Auto Scaling group.
Do I need to duplicate my server code on every EC2 instance?
If so, I assume I'll have to create a separate DB server which all of the EC2 instances will connect to.
Am I right? Can anyone experienced with Amazon AWS answer this? I tried googling, but most of the links point to detailed tutorials which don't actually answer my question.
Any help would be much appreciated. Thanks
Yep, that's basically correct. The code needs to be on all instances fronted by the load balancer. For the database, you may want to look into RDS.
Of course not, but you certainly can.
That's what EFS volumes are for: shared volumes that can be mounted on more than one EC2 instance. You do have to choose a region that supports them, since they are only available in certain regions. As an AWS certified architect candidate, I can suggest more than two options.
You can follow your first approach: create an EC2 instance, put your code on it, then create an AMI and use that AMI to launch your upcoming EC2 instances through an Auto Scaling group. In my opinion a bad decision, since on any code change you have to go onto an instance, put the new code there, then create a new AMI and a new Auto Scaling launch configuration. Lots of steps, but it will work.
Second approach: follow the first approach, but do not create an AMI on every change. Instead, upload your code to a private (I suppose) repo like GitHub or Bitbucket, install SSM with the appropriate roles for managing EC2, and on every code change push to the repo and pull it onto your EC2 instances using SSM. You could even write a webhook so that Bitbucket calls an API that runs git pull on each EC2 instance. Arguably that last sentence is a third approach, but it needs more coding!
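A hedged sketch of the SSM pull step (the tag values and code path are placeholders; this assumes the instances run the SSM agent and have an instance profile that allows Run Command):

# Run 'git pull' on every instance tagged Role=web
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Role,Values=web" \
    --parameters 'commands=["cd /var/www/app && git pull"]'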
Last but not least: use an EFS volume, put your code on it, mount the volume on your EC2 instances, add an auto-mount entry so it mounts on every boot, point your Apache httpd document root at the EFS folder, and create an AMI with this configuration. Voila! Every new EC2 instance will use the same code, located on this shared network volume. Whenever you need to change something, log in to a third instance outside of your Auto Scaling group for a short while, upload your changes, and then turn it off; all of your EC2 instances will immediately pick up the new code. Of course, you could also pull the changes from a repo, as in the third approach.
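A hedged sketch of the EFS setup on Amazon Linux (the file system ID and mount path are placeholders):

# Install the EFS mount helper
sudo yum install -y amazon-efs-utils

# Auto-mount the volume on every boot (placeholder file system ID)
echo "fs-0123456789abcdef0:/ /var/www/html efs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a

# Then point Apache's DocumentRoot at /var/www/html (or your EFS folder) in httpd.conf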
There may be more approaches; I'm using the third one, with private repos of course, and so far I haven't faced any problems (fingers crossed)!
One other option is to use Elastic Beanstalk to deploy Node.js applications; there is a guide specific to Node.js. This takes care of most of the things you would otherwise have to set up yourself with plain EC2, for example the ELB, Auto Scaling, CloudWatch, etc.
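As a hedged sketch using the Elastic Beanstalk CLI (the application and environment names are placeholders):

# From the project root: initialize the app, create an environment, and deploy
eb init my-api --platform node.js --region us-east-1
eb create my-api-env
eb deploy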
For the database, you may want to use a master-slave setup with read replicas. Another option is to evaluate NoSQL databases like DynamoDB, if they fit your use case. The scalability of DynamoDB tables is managed by AWS, so you don't need to worry about it.

Deploy to Amazon EC2 from CloudBees

Is it possible to set up CloudBees to deploy to an Amazon EC2 instance after a successful build?
Thanks,
W
Yes, but you don't provide many details of what you mean by "deploy". I suppose you mean using scp? If so, you must copy your Jenkins public key to the authorized_keys file on the instance, and make sure that your security group rules allow CloudBees' build machines to talk to your EC2 instance.
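A hedged sketch of such a post-build shell step (the key path, host name, artifact, and service name are all placeholders):

# Copy the build artifact to the instance and restart the app
scp -i ~/.ssh/jenkins_key -o StrictHostKeyChecking=no \
    build/app.tar.gz ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com:/opt/app/
ssh -i ~/.ssh/jenkins_key ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com \
    "cd /opt/app && tar xzf app.tar.gz && sudo systemctl restart app"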