I have a requirement to create multiple instances of the same EC2 image from Lambda, as the image runs some Windows processing that creates PDF files. Can I launch multiple instances of the same EC2 image and pass different parameters to each instance (say, the name of an S3 bucket, where each instance gets a different bucket name)?
Thanks in advance.
An AWS EC2 image (AMI) is essentially a snapshot of how the server should look.
This would include:
Any packages you need installed
Any configuration you need
If you want custom configuration applied on top you would need to either:
Make use of UserData when you launch the instance to run those additional actions
Create a custom AMI with the custom configuration included
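A minimal sketch of the UserData approach, launching one instance per bucket; the AMI ID, instance type, bucket name, and file paths are all placeholders:

```shell
# Build a per-instance user data script that hands the instance its bucket name.
BUCKET="pdf-output-bucket-1"   # hypothetical bucket; vary this per launch
cat > user-data.txt <<EOF
<powershell>
# The Windows PDF job reads its target bucket from this file at startup.
Set-Content -Path C:\pdfjob\bucket.txt -Value "$BUCKET"
</powershell>
EOF

# Launch an instance with that user data (requires AWS credentials):
# aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 1 \
#   --instance-type t3.medium --user-data file://user-data.txt
```

Calling this once per bucket name (for example from Lambda in a loop) gives each instance its own parameter.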
Related
Problem:
I have an EC2 instance running, and I have made some modifications to it: installed Docker, set up directories for certs, etc. Now I want to recreate the same instance using infrastructure-as-code principles. Instead of remembering every addition I have made and writing a template by hand, I am trying to find a way to export my current EC2 instance into a JSON or YAML format, so that I can terminate this instance and create another one equivalent to the one running.
I have tried:
aws ec2 describe-instances
Reading through the AWS CLI EC2 docs
Reading through the CloudFormation docs
Searched Google
Searched SO
Since you have no record of how the instance was set up, the only choice is to create an Amazon Machine Image (AMI). This creates an exact copy of the disk, so everything you have installed will be available to any new instances launched from the AMI. The CloudFormation template can then be configured to launch instances using this AMI.
If, on the other hand, you knew all the commands that needed to be run to configure the instance, then you could provide a User Data script that would run when new instances first boot. This would configure the instances automatically and is the recommended way to configure instances because it is easy to modify and allows instances to launch with the latest version of the Operating System.
Such a script can be provided as part of a CloudFormation template.
See: Running commands on your Linux instance at launch - Amazon EC2
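For illustration, a first-boot User Data script along those lines, written to a local file for review; the package names and directory are assumptions, not the asker's actual setup:

```shell
# Sketch of a User Data script (cloud-init runs it as root on first boot).
cat > userdata.sh <<'EOF'
#!/bin/bash
# Package names and paths below are examples only.
yum update -y
yum install -y docker
systemctl enable --now docker
mkdir -p /etc/myapp/certs   # hypothetical directory for certs
EOF
chmod +x userdata.sh
```

In a CloudFormation template this script goes in the `UserData` property of the `AWS::EC2::Instance` or launch template.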
One option would be to create an AMI from the live instance and spin up a new CloudFormation stack using that AMI.
The other would be importing the resource: https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
There is a tool (still in beta) developed by AWS called CloudFormer:
CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket.
CloudFormer itself is an AWS-managed template. Once you launch it, the template creates an AWS::EC2::Instance for you, along with a number of other related resources. You access the instance through a URL in your browser, and an AWS wizard guides you from there.
Its tutorial even shows how to create a CloudFormation template from an existing EC2 instance.
Import the EC2 instance into CloudFormation, then copy its template.
Read more: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html
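The import flow takes a template that declares the instance (with DeletionPolicy: Retain) plus a mapping from the template's logical ID to the real instance ID. A sketch, with a placeholder instance ID and stack name:

```shell
# Describe which existing resource maps to which logical ID in the template.
cat > resources-to-import.json <<'EOF'
[
  {
    "ResourceType": "AWS::EC2::Instance",
    "LogicalResourceId": "MyInstance",
    "ResourceIdentifier": { "InstanceId": "i-0123456789abcdef0" }
  }
]
EOF

# Then create and execute an IMPORT change set (requires credentials
# and a template.yaml that declares MyInstance):
# aws cloudformation create-change-set --stack-name my-stack \
#   --change-set-type IMPORT \
#   --resources-to-import file://resources-to-import.json \
#   --template-body file://template.yaml
```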
So I have created a terraform script that does the following:
Spins up an EC2 instance
Copies over some files
Run some remote commands to install stuff from repos
Creates an elasticsearch service domain
Now I need to configure the EC2 instance with the endpoint I get back from the Elasticsearch domain, so the application uses the right endpoint (currently it has some default value).
How can I pass the endpoint value into the file and then copy it over to the EC2 instance? What would be the recommended approach?
Thanks in advance.
Terraform will derive the order automatically, when you refer to the output of a certain resource. You can use the file provisioner to create files on the EC2 instance.
If the EC2 instances don't need to be created before the Elasticsearch domain, you can use the template provider to render the file based on the values of the Elasticsearch resource, then copy the rendered file up to the EC2 instances.
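The idea in plain shell (Terraform's template rendering does the equivalent): substitute the real endpoint into a config template, then copy the rendered file to the instance. The endpoint value, placeholder token, and file names here are made up:

```shell
# Pretend this came from the Elasticsearch domain resource's endpoint attribute.
ES_ENDPOINT="search-mydomain-abc123.eu-west-1.es.amazonaws.com"

cat > app.conf.tpl <<'EOF'
elasticsearch_host = "__ES_ENDPOINT__"
EOF

# Render the template with the real endpoint.
sed "s|__ES_ENDPOINT__|$ES_ENDPOINT|" app.conf.tpl > app.conf

# Copy it up (in Terraform, the file provisioner would do this step):
# scp app.conf ec2-user@<instance-ip>:/etc/myapp/app.conf
cat app.conf
```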
Right now we have one instance. How do we create another instance with the same content and files as the first server? Do we just create a new instance?
Also, if we make a change to a file on server one, do we have to make the same change on server two? Thanks
The best way to achieve your use case is:
Install AWS CLI on your instance.
Create a S3 bucket and add all your application files to that S3 bucket.
Add a cron job on your instance that runs an S3 sync command, something like this:
aws s3 sync s3://mybucket /<path to your application root>
Now take an AMI of your instance.
Attach your instance to a load balancer; if you want to add another instance, create it from the same AMI.
Apply any file change in the S3 bucket. That way, no matter how many instances you add behind the load balancer, they all stay in sync with the bucket: if you change or add a file in S3, it will be changed or added on every instance that is running behind the load balancer and syncing with the bucket.
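The cron job from the steps above could look like this; the bucket name, path, and interval are placeholders:

```shell
# A crontab entry that pulls the latest application files every 5 minutes.
echo '*/5 * * * * aws s3 sync s3://mybucket /var/www/html --delete' > s3sync.cron
cat s3sync.cron
# Install it on the instance with:
#   crontab s3sync.cron
```

The `--delete` flag also removes local files that were deleted from the bucket; drop it if you only want additions and updates propagated.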
Suppose you have an application on VM1 that you need to load balance. You would then follow these steps:
1. Take a snapshot of VM1, and of its EBS volume if one is attached.
2. Create VM2 from this snapshot. This ensures VM2 has exactly the same content; only the MAC and IP configuration differ, the rest of the data remains the same.
3. Add VM1 and VM2 to the load balancer for whichever application you want to load balance.
4. If you want changes made to VM1's data to be reflected on VM2 without doing it manually, use the rsync (remote sync) utility. It takes a directory and a machine name/IP as input and keeps that directory in sync between the machines: changes made to the directory on one machine are automatically propagated to the other.
The best thing to do would be:
Create an AMI with all the necessary configuration and software installed. Always try to use a golden AMI where possible (explore something like packer.io).
If you can't use a golden AMI, use a custom script as part of the user data when launching the EC2 instance to complete the configuration.
Create an Auto Scaling Group using the baked AMI
In the console under Auto Scaling choose Auto Scaling Groups
On the Details tab, choose Edit
For Load Balancers, select your load balancer and save.
This way, just by changing the number of instances in the Auto Scaling group, instances will be added (using the baked AMI) or removed. Better still, you can add thresholds to increase or decrease the instance count automatically. Because the entire Auto Scaling group is associated with the ELB, any new instances are automatically registered with the ELB.
Note: Your ELB and the ASG should be in the same region
Please check the amazon docs link: Attaching a Load Balancer to Your Auto Scaling Group
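The console steps above can also be done with the CLI. This writes the commands to a small script for review rather than executing them; the group name, ELB name, and target group ARN are placeholders:

```shell
ASG="web-asg"; ELB="web-elb"   # hypothetical names
{
  # Classic Load Balancer:
  echo "aws autoscaling attach-load-balancers --auto-scaling-group-name $ASG --load-balancer-names $ELB"
  # ALB/NLB are attached via a target group instead:
  echo "aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name $ASG --target-group-arns <target-group-arn>"
} > attach-lb.sh
cat attach-lb.sh
```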
If I create a custom AMI for an EBS-backed EC2 instance after installing numerous applications and making a lot of config changes to the instance, like iptables, the httpd.conf file, etc...
Will the custom AMI capture all those config changes and installed applications, so that I can use it to launch an exact functioning copy of the EC2 instance the custom AMI originated from?
Anything done after launching an EC2 instance is independent of what the original AMI had. There also isn't any relationship among instances that use the same AMI, except that they were all materialised from a single AMI; the individual, independent changes on each instance live in silos.
Coming back to your point: after making numerous changes, you need to create an AMI from the running instance where the changes were made. Going forward you can use that AMI to create new instances. Instances that already exist won't reflect any new changes.
This is where tools like Ansible, Chef, and Puppet come into the picture.
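The "create an AMI from the running instance" step, sketched as a CLI command written to a script for review (the instance ID and AMI name are placeholders):

```shell
cat > make-ami.sh <<'EOF'
# --no-reboot avoids stopping the instance, at the cost of
# filesystem consistency in the snapshot.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "webserver-configured-v1" \
  --no-reboot
EOF
cat make-ami.sh
```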
Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/http folder.
The structure: inside the CodeIgniter folder I have a folder named 'UPLOAD'. This folder is used for uploaded images and files.
I made an AMI image from the EC2 instance.
I have set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically. But all the data in the 'UPLOAD' folder on the old instance is lost.
I want to separate the 'UPLOAD' folder in CodeIgniter from the EC2 instance, so that whenever a new instance is created, it gets the UPLOAD folder and its contents without loss.
How can I do this?
Thanks in advance.
Note: I am using MySQL on Amazon RDS.
You can use a shared Elastic Block Store (EBS) mounted directory.
If you configure your stack manually using the AWS Console, go to the EC2 service, then Elastic Block Store -> Volumes -> Create Volume. In your launch configuration you can then bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using Cloudformation to provision your stack, refer to this template for guidance.
This assumes Codeignitor can be configured to state where its UPLOAD directory is.
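For reference, creating and attaching such a volume from the CLI looks like this, written to a script for review (the AZ, size, and IDs are placeholders). Note that a volume attaches to one instance at a time unless you use io1/io2 Multi-Attach:

```shell
cat > ebs-setup.sh <<'EOF'
# Create the volume in the same AZ as the instance.
aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp3
# Attach it (IDs are placeholders); then format and mount it from the OS.
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf
EOF
cat ebs-setup.sh
```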
As said by Mike, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store images in S3 via the AWS API instead of on the local filesystem.
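For example, what the application would effectively do per upload (via the SDK rather than the CLI); the bucket and key names are placeholders:

```shell
cat > s3-upload.sh <<'EOF'
# Upload a file to the shared bucket instead of the local UPLOAD folder:
aws s3 cp /var/www/UPLOAD/photo.jpg s3://my-upload-bucket/UPLOAD/photo.jpg
# Any instance can then list/fetch it:
aws s3 ls s3://my-upload-bucket/UPLOAD/
EOF
cat s3-upload.sh
```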