I have a CloudFormation template that creates two EBS volumes, which I attach to the instance with aws ec2 attach-volume from user data. I also have an Auto Scaling group set up, so when I update the stack with a different instance type and it launches a new instance, the volumes are not attached.
I checked the logs and they say the volumes are not available. I know why: the old, terminating instance is still holding those volumes when the ASG launches the new one. Is there any way I can reuse those volumes?
Your problem is that the EBS volumes are still attached to a different EC2 instance at the moment you try to attach them.
One solution is to write a program (e.g. in Python) that monitors the EBS volumes. The program is launched from UserData; once the volumes become available, it attaches them and exits.
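The same logic can also be sketched directly as a UserData shell script with the AWS CLI instead of a separate Python program. This is only a sketch: the volume IDs and device names are placeholders, and it assumes the instance profile allows DescribeVolumes/AttachVolume, that a default region is configured, and IMDSv1-style metadata access.

#!/bin/bash
# Placeholder volume IDs and device names - substitute your own.
VOLUMES="vol-0aaaa1111bbbb2222 vol-0cccc3333dddd4444"
DEVICES="/dev/xvdf /dev/xvdg"
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

set -- $DEVICES
for VOL in $VOLUMES; do
  # Block until the old instance has released the volume, then claim it.
  aws ec2 wait volume-available --volume-ids "$VOL"
  aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INSTANCE_ID" --device "$1"
  shift
done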
When I add an ASG update policy with MinInstancesInService = 0 (and min = 1, desired = 1, max = 1), it works, because with MinInstancesInService = 0 the ASG terminates the old instance before it launches the new one.
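In the CloudFormation template, that update policy looks roughly like the following fragment (a sketch; the resource name is illustrative):

MyAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 0  # terminate the old instance before launching the new one
  Properties:
    MinSize: '1'
    MaxSize: '1'
    DesiredCapacity: '1'
    # ... launch template, subnets, etc.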
In an Auto Scaling group, I have an EC2 instance (with two EBS volumes) which could terminate due to a fault, in which case a new EC2 instance is spun up in its place inside the Auto Scaling group.
My question is: how can the two EBS volumes attached to the old EC2 instance be attached to the new EC2 instance?
If this is a manual process, could a Terraform reference be provided?
@rukan I guess you need to set the 'DeleteOnTermination' value to false, as you will need the EBS volumes of the old instance for the newly created one.
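For instance, in a CloudFormation launch template the block device mapping would look roughly like this fragment (a sketch; the device name and sizing are placeholders):

BlockDeviceMappings:
  - DeviceName: /dev/xvdf
    Ebs:
      DeleteOnTermination: false  # keep the volume after the instance is terminated
      VolumeSize: 100
      VolumeType: gp3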
References:
https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_Ebs.html
https://francescoboffa.com/aws-stateful-service-ebs/
Update:
I did some research on your requirement, and I can conclude that there is no standard method to re-use an EBS volume in an AWS Auto Scaling group. Moreover, it is not a recommended approach: an Auto Scaling group starts multiple EC2 instances, while each EBS volume can only be attached to a single EC2 instance. For now, I suggest using AWS EFS instead of EBS.
But if this is a must-do requirement with EBS, then you need some more complex logic: a startup script that attaches the volume to the EC2 instance and then mounts it (a sketch follows after the reference). You can refer to this answer:
Reference:
https://serverfault.com/questions/831974/can-i-re-use-an-ebs-volume-with-aws-asg
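A minimal sketch of such a startup script, assuming a placeholder volume ID, device, and mount point, and assuming the volume already carries a filesystem:

#!/bin/bash
# Placeholder identifiers - substitute your own.
VOL=vol-0aaaa1111bbbb2222
DEVICE=/dev/xvdf
MOUNT_POINT=/data
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Wait for the previous instance to release the volume, then attach it here.
aws ec2 wait volume-available --volume-ids "$VOL"
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INSTANCE_ID" --device "$DEVICE"
# Wait until the kernel exposes the block device, then mount it.
while [ ! -b "$DEVICE" ]; do sleep 1; done
mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"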
I require a fair bit of RAM and disk for a Docker container that will run infrequently as a task on ECS. My workflow:
Start EC2 instance for ECS
Run task on ECS
Terminate EC2 instance
I terminate the instance between runs because these resources are relatively expensive, and I don't want them running when not in use. Fargate is not appropriate due to its resource limitations, so I'm running ECS on EC2.
I cannot get more than 30GB of disk for the image without a lot of human intervention. I can attach arbitrary EBS data volumes (/dev/xvdcz), but AWS still always creates a 30GB root volume, /dev/xvda, which is what the container itself uses.
How do I use a larger than 30GB volume for the Docker container itself?
What I've tried:
Creating an Auto Scaling Group with a launch configuration where the root volume is larger. This does create an instance with a larger root volume, but there is no way to attach this group to a cluster, or to link the EC2 instance it creates with the cluster. Cluster creation seems to be tied to its own Auto Scaling group and instance.
Using an instance with a large dedicated SSD rather than an EBS volume; again, the 30GB partition is created for the container.
Mounting /dev/xvdcz in the container. This does add the space, but requires me to rewrite my code to use only that folder.
Using the AWS ECS CLI to modify the disk after creation, as described in a similar issue. However, since my EC2 instance terminates after task completion, its ID does not persist between runs, and aws ecs describe-clusters does not report the underlying EC2 instance, so this cannot be automated: a human needs to boot up the instance and look at the ID before the volume size can be modified via the CLI.
This issue was brought up on GitHub back in 2016, but it was marked as unimportant and closed; the discussion there is not very helpful.
Under EC2 > Auto Scaling, create a new Auto Scaling Group with a launch config that has your chosen root volume size. This will boot up an EC2 instance by default; leave it for now.
Go to your cluster's auto-created Auto Scaling Group and note its name down (you will need it later), then click Launch Template > Edit. If it says 'launch configuration', press 'switch to launch template'.
Select the launch template of the Auto Scaling Group you created in the previous step (note: if you created multiple versions, select the latest; version 1 is selected by default). Select 'adhere to launch template' and click 'update'.
Delete the Auto Scaling group you first created. This will shut down its related EC2 instance.
Reboot your cluster's EC2 instance:
aws autoscaling set-desired-capacity --auto-scaling-group-name NAME_OF_GROUP --desired-capacity 0
Wait for it to shut down; you can see the instance state in the EC2 console.
aws autoscaling set-desired-capacity --auto-scaling-group-name NAME_OF_GROUP --desired-capacity 1
Again, wait for it to boot up. Once booted, and on any subsequent boots, /dev/xvda will be the size you specified.
Scenario
I currently have an EC2 instance with a 30GB root EBS volume attached, and I have some files stored on that EBS volume.
If I delete the EC2 instance with delete-on-termination set to false, the EBS volume persists.
Desired outcome
I want to provision a new EC2 instance (provisioned by an Auto Scaling group) such that it uses the old EBS volume, which was detached as a result of terminating the old instance, as its root volume.
Note
I want the liberty of choosing the OS of the newly provisioned EC2 instance, so creating an AMI does not work.
You cannot directly launch a new Amazon EC2 instance with an existing Amazon EBS volume as its root. Instead, you would need to (see the CLI sketch after this list):
Launch a new Amazon EC2 instance with a new root volume
Stop the instance
Detach the root volume
Attach the 'old' EBS volume
Start the instance
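With the AWS CLI, the sequence would look roughly like this. It is only a sketch: the instance and volume IDs are placeholders, and the device name must match the AMI's root device (commonly /dev/xvda or /dev/sda1):

# Placeholder IDs - substitute your own.
INSTANCE=i-0123456789abcdef0
NEW_ROOT=vol-0aaaa1111bbbb2222   # root volume created at launch
OLD_ROOT=vol-0cccc3333dddd4444   # the 'old' volume you want to boot from

aws ec2 stop-instances --instance-ids "$INSTANCE"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE"
aws ec2 detach-volume --volume-id "$NEW_ROOT"
aws ec2 wait volume-available --volume-ids "$NEW_ROOT"
aws ec2 attach-volume --volume-id "$OLD_ROOT" --instance-id "$INSTANCE" --device /dev/xvda
aws ec2 start-instances --instance-ids "$INSTANCE"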
Storing data in the root EBS volume might be a bad idea to start with.
Consider one of the following:
Mount another EBS volume to the instance to store only the required data (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html): best performance, highest cost/effort, but your application doesn't change a bit.
Create an EFS file system and mount it on your instances (https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html): reasonable effort, minimal if any changes to the application (see the mount sketch after this list).
Store data in S3: ideal from a price standpoint, but requires changes to the application.
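For the EFS option, mounting often boils down to a single command. A sketch, assuming the plain NFS client and a placeholder file system ID, region, and mount point:

sudo mkdir -p /mnt/efs
# Placeholder file system ID and region - substitute your own.
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs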
I have an auto-scaling group with 2 instances.
Every time an instance is launched, an EBS volume is attached to it. When the instance is replaced/terminated, is the EBS volume deleted?
I want to keep a tight budget on my account, and I don't want to have volumes lingering and pay for them.
You can configure it both ways. If you are using the web interface, when you get to the Storage step of the launch configuration you will see a 'Delete on Termination' checkbox for each volume.
Just check the 'Delete on Termination' checkbox; then, whenever an instance is terminated, the EBS volume associated with it will be deleted as well.
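The same flag can be set from the CLI when creating the launch configuration. A sketch with placeholder names (the device name must match your AMI's root device):

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-launch-config \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":true}}]'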
I have several AMIs that developers use to spin up instances. These AMIs do not have 'delete on termination' set on all their EBS volumes. At times, terminating instances launched from these AMIs has the unintended consequence of leaving behind orphaned EBS volumes. Unfortunately, 'blessing' a new AMI for general use is quite an ordeal. Is it possible to edit an existing AMI to turn on 'delete on termination', or is the only way forward to create a new AMI with the proper settings?
It is not possible to modify the "Delete on termination" value on an existing AMI.
So you have 2 choices:
Launch an EC2 instance from your AMI and produce a new AMI with the appropriate 'Delete on Termination' value, or
Modify the value when you launch the new EC2 instance.
Once the instance is running, you can call ModifyInstanceAttribute (modify-instance-attribute in the CLI) on the blockDeviceMapping attribute:
aws ec2 modify-instance-attribute --instance-id i-a3ef245 --block-device-mappings '[{"DeviceName":"/dev/sda","Ebs":{"DeleteOnTermination":false}}]'
You can see an example here: http://www.petewilcock.com/how-to-modify-deletion-on-termination-flag-for-ebs-volume-on-running-ec2-instance/
There is no such feature.
In addition, I think you misunderstand the purpose of EBS volumes vs snapshots in the AWS web console.
When you launch an instance, an EBS volume is assigned to it (if it is an EBS-backed instance type such as t2 or c3); once you terminate the instance, that associated volume is deleted.
It is a different story when you create an EBS volume yourself and attach it to an instance: an attached volume stays even after the instance it was attached to is deleted. This is intended design, since EBS volumes are network storage anyway; you are meant to be able to detach them and attach them to different instances dynamically.
On the other hand, your users may create snapshot(s) of their instances, which are stored under the Snapshots section. These persist even after you terminate the original instance. Once you delete the original instance, the volume it pointed to will be 'orphaned'.
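If you want to spot such orphaned volumes (and unattached volumes in general), one quick way is to filter on the 'available' status. A sketch using the standard AWS CLI:

# List volumes that are not attached to any instance.
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].{ID:VolumeId,Size:Size,Created:CreateTime}' \
  --output table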
It is good practice to create snapshots of instances as backups, but things will run wild if you don't have a standard policy for handling them. No automation can fix what is by nature a process issue.
You MUST enforce a policy and standards for your developers to follow as well, e.g. backup cycles, tags for snapshots, etc.