I am looking to implement Auto Scaling for my website using AWS Auto Scaling, target groups, and a load balancer.
As the first step, I have created an Amazon Machine Image (AMI) based on the current EC2 instance, and I am using it as a scale base.
My question is: when Auto Scaling kicks in and duplicates the instances, it appears to use the AMI I created, so I am wondering how we can manage OS updates, such as kernel security updates (e.g. yum update, apt-get upgrade).
In this type of scenario, what is the simplest way to manage OS updates in an Auto Scaling environment?
When you create a Launch Template, you can specify a User Data script in the Advanced Details section. This script runs when the new EC2 instance starts up.
This script can contain OS updates and other environment setup. You can read further about User Data scripts in the AWS docs.
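For example, a minimal sketch of such a User Data script for an Amazon Linux or RHEL-style AMI (use apt-get on Debian/Ubuntu) could be:

```bash
#!/bin/bash
# Minimal sketch of a Launch Template User Data script.
# Assumes an Amazon Linux / RHEL-style AMI; use "apt-get update && apt-get upgrade -y"
# on Debian/Ubuntu instead.
set -euxo pipefail

# Apply pending OS and security updates before the instance goes into service
yum update -y

# Any other environment setup can follow here
```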
Is there any way to install exe/MSI agents on AWS EC2 instances in an automated way? Specifically, I am looking for a counterpart to Azure's Custom Script Extension. [Free of cost]
Scenario:
I want to install the BigFix and Datadog agents on 1000 EC2 instances. This is a one-time job, so I am not looking for any solution that involves Chef, Puppet, etc.
Yes, you can pass a script to the instance that will be executed on the first boot (but not thereafter). It is often referred to as a User Data script.
See:
Running Commands on Your Windows Instance at Launch - Amazon Elastic Compute Cloud
Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
If you wish to install after the instance has started, use the AWS Systems Manager Run Command.
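As a rough sketch, Run Command can target many instances at once by tag; the tag and the MSI path below are placeholders:

```bash
# Sketch: run an install command on many instances at once with SSM Run Command.
# Assumes the instances run the SSM agent and have an instance profile with SSM permissions.
# The tag (Environment=prod) and the MSI path are placeholders.
aws ssm send-command \
  --document-name "AWS-RunPowerShellScript" \
  --targets "Key=tag:Environment,Values=prod" \
  --parameters '{"commands":["msiexec.exe /i C:\\temp\\datadog-agent.msi /qn"]}' \
  --comment "One-time agent install"
```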
In AWS CloudFormation, I use the CloudFormer tool. I can use that tool to create a CloudFormation template from existing resources, and then use the template to create a stack. I tested the tool, and it works, in that it can create instances with the same memory size, the same volume size, the same VPC settings, and it auto-starts the instances. But there are no files in the volume.
Do I have to create a snapshot of the existing volume, create a new volume from the snapshot, attach it to the newly created instance, and copy the files manually?
Or is there any better way ?
Do I have to create a snapshot of the existing volume, create a new volume from the snapshot, attach it to the newly created instance, and copy the files manually?
CloudFormation provisions resources, but it is not responsible for provisioning the contents of those resources - that you have to do yourself.
You can leverage EC2 User Data to pull files from S3 or other repositories as the instance boots.
Or is there any better way ?
If you want to share data between applications, EFS is always an option. In your case, though, using Userdata might be effective.
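A minimal sketch of the User Data approach mentioned above, assuming the instance profile allows reading the bucket (bucket and paths are placeholders):

```bash
#!/bin/bash
# Sketch: pull application files from S3 at first boot via User Data.
# Assumes the instance profile grants s3:GetObject on the bucket; the bucket
# name and paths are placeholders.
aws s3 cp s3://my-app-bucket/app/ /opt/app/ --recursive
```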
If you wish to launch new EC2 instances with software automatically loaded, there are basically two choices:
Use a pre-configured AMI, or
Use a startup script to load the software
Pre-configured AMI
An Amazon Machine Image (AMI) is a copy of a disk. When a new EC2 instance is launched, an AMI is selected and the boot disk (and optionally other disks) are automatically pre-loaded with the contents of the AMI.
A common practice is to boot an EC2 instance and configure it as desired. Then, create an AMI. Thereafter, when a new EC2 instance is required for the application, launch it using the pre-built AMI.
There are also tools available to automate the building of an AMI, such as Netflix Aminator and Packer.
Benefits: New machine boots quickly, fully-configured.
Issues: Need to create a new AMI whenever you update your software.
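For instance, once an instance has been configured, the AMI can be captured from the CLI (the instance ID and image name below are placeholders):

```bash
# Sketch: capture a pre-configured instance as an AMI.
# The instance ID, name and description are placeholders.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-app-v1" \
  --description "Pre-configured application image"
```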
Use a startup script to load the software
When an Amazon EC2 instance is launched from a standard Amazon-provided AMI (eg Amazon Linux, Microsoft Windows), software on the AMI automatically looks at the User Data passed to an EC2 instance. If the User Data contains a startup script, the script will be executed -- but only the first time that an instance is launched. This is an excellent way to install software on the instance.
You are responsible for writing the script. The script should install whatever tools, software and data you want on the instance.
Benefits: Updating your software? Just launch a new instance and the script will install the latest version of your software (assuming you have written the script to always point to the latest version).
Issues: It takes longer to launch the new instance, since the software is being installed.
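As an illustration of this approach, a startup script passed as User Data could look like the following (the web server package is just an example of "your software"):

```bash
#!/bin/bash
# Sketch of a User Data startup script that installs software at first boot.
# Assumes an Amazon Linux AMI; swap in whatever packages your application needs.
yum install -y httpd
systemctl enable --now httpd
```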
I would like to create a Managed Compute Environment for AWS Batch, but use EC2 User Data to configure the instances as they are brought into the ECS fleet that Batch is scheduling jobs onto.
It shouldn't matter, but the purpose of the User Data script is to pull down large data files onto an InstanceStore that the Docker containers will reference.
This is possible in ECS, but I have found no way to pass User Data to a Managed Batch Compute Environment.
At most, I can specify the AMI. But since we're going with Managed, we must use the Amazon ECS-optimized AMI.
I'd prefer to use EC2 User Data as the solution, since it gives an entry point for any other bootstrapping we wish to perform. But I'm open to other hacks or solutions, so long as they are applicable to a Managed Compute Environment.
You can create an AMI based on the AWS provided AMI, and customize it. It will still be managed since the Batch and/or ECS daemon is running on it.
As a side note, I'm trying to do the same thing but have had no luck so far. I may end up creating a custom AMI and including the configuration script in the AMI itself in /etc/rc.local. Not ideal, but I don't think Batch can pass a user data script other than what it needs. I am still looking into this.
You can create a launch template containing your user data, then assign this launch template to your compute environment. Keep in mind that you might have to clean the cloud-init directory in your AMI, since it was probably already spun up once (at AMI creation).
Launch template user guide
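A rough sketch of that approach from the CLI (the launch template name, bucket and paths are placeholders; note that AWS Batch expects launch-template user data to be in MIME multi-part format):

```bash
# Sketch: create a launch template whose user data runs at boot, then reference it
# from the Batch compute environment. Names, bucket and paths are placeholders.
USER_DATA=$(base64 -w0 <<'EOF'
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Pull large data files onto the instance store for the containers to reference
aws s3 cp s3://my-data-bucket/big-files/ /mnt/data/ --recursive
--==BOUNDARY==--
EOF
)

aws ec2 create-launch-template \
  --launch-template-name batch-bootstrap \
  --launch-template-data "{\"UserData\":\"${USER_DATA}\"}"
# Then set this launch template on the Batch compute environment
# (launchTemplate in --compute-resources when creating the environment).
```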
I am able to monitor a Windows instance's memory usage using custom metrics in CloudWatch.
I have followed the following blog to achieve that :
http://blog.krishnachaitanya.ch/2016/03/monitor-ec2-memory-usage-using-aws.html
Using that, I am able to monitor only one instance, and I am now repeating the process on every instance launched.
Can I do it at once for all instances, instead of changing the .json file and enabling the CloudWatch integration on every instance?
If the instances are already launched, you have to do it for each instance. Otherwise, you can create an AMI of the first instance and launch the other instances from that AMI, so you do not have to repeat the setup on each one.
If you have to do it manually, consider something like Ansible to do it for you. There is a bit of a learning curve, but it is not difficult.
BTW, adding custom metrics is straightforward for Linux instances. Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
For Windows instances: Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs Using Amazon EC2 Simple Systems Manager
If your instances have the appropriate instance profile and are running the SSM agent (which they probably are if you launched from an Amazon-provided AMI), you can use SSM Run Command to run arbitrary PowerShell against one instance or a set of instances (using tags). There is even an Amazon-managed SSM document called AWS-ConfigureCloudWatch that is built specifically for this use case.
See http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html
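A rough sketch of invoking it against a set of tagged instances (the tag is a placeholder, and the parameter names should be verified against the document's schema in the SSM console):

```bash
# Sketch: apply the Amazon-managed AWS-ConfigureCloudWatch document to every instance
# carrying a given tag. The tag and the parameter values are placeholders; check the
# document's parameter schema before running.
aws ssm send-command \
  --document-name "AWS-ConfigureCloudWatch" \
  --targets "Key=tag:Role,Values=web" \
  --parameters '{"status":["Enabled"],"properties":["<contents of your CloudWatch JSON config>"]}'
```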
What I need to do is: when an EC2 instance is launched, a Lambda function (or something else) installs the script to monitor memory and disk usage on the host.
I'm thinking about how I can do that. Can anyone give me an idea?
You don't need a Lambda function. Pass your install script as User Data.
See: Running Commands on Your Linux Instance at Launch
It appears that your requirement is to monitor Memory and Disk usage from an Amazon EC2 instance. I will assume that you want to monitor it via Amazon CloudWatch.
Amazon CloudWatch provides default metrics for EC2 instances including CPU utilization, network traffic and disk access. These metrics are visible from the hypervisor. However, CloudWatch cannot see 'inside' the EC2 instance, so it is necessary to run scripts from within the instance to track things like free memory and free disk space. The scripts talk to the operating system to retrieve these metrics, which is why they have to run 'within' the instance.
Some standard monitoring scripts are available for Linux instances: Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
You can, of course, write your own scripts to send custom metrics to CloudWatch. Once installed, the scripts will run automatically when the instance is restarted.
If you wish to install these scripts (or your own scripts) on new EC2 instances, there are a couple of methods:
Install the scripts on one instance, then create an Amazon Machine Image (AMI) of that instance that contains a copy of the disk. You can then launch new instances using that AMI, and the scripts will already be installed on the new instances.
Launch the instance(s) with a User Data script to install the monitoring script. Any script passed through User Data will automatically be run the first time that the instance is started.
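As an example of the second method, here is a hedged User Data sketch that installs the Linux monitoring scripts from the guide above and schedules them with cron (verify the package list and download URL against the guide for your distribution):

```bash
#!/bin/bash
# Sketch of a User Data script following the Linux monitoring-scripts guide.
# Package names and the download URL are the ones documented there; adjust for your distro.
yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA unzip
cd /opt
curl -O https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip
unzip CloudWatchMonitoringScripts-1.2.2.zip && rm CloudWatchMonitoringScripts-1.2.2.zip
# Report memory and disk metrics every 5 minutes; requires an instance role that
# allows cloudwatch:PutMetricData.
echo '*/5 * * * * root /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --disk-space-util --disk-path=/ --from-cron' > /etc/cron.d/cloudwatch-mon
```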
When you are using an Auto Scaling group, you must specify a launch configuration.
Part of the launch configuration is the User Data script, which is executed when the instance boots.
This can also easily be done from CloudFormation templates, if that is what you use to create the new EC2 VMs.
You can find samples of such scripts in the AWS documentation.
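A minimal CLI sketch of that setup (the names, AMI ID, and bootstrap.sh file are placeholders):

```bash
# Sketch: create a launch configuration whose User Data runs at boot, then attach it
# to the Auto Scaling group. Names, AMI ID and bootstrap.sh are placeholders.
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc-v2 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --user-data file://bootstrap.sh

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc-v2
```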