Mount EBS volume to a running AWS instance with a script - amazon-web-services

I'd like to dynamically mount and unmount EBS volumes on a running AWS instance using a script. Is this achievable on both Linux and Windows instances, and if so, what is the expected duration of such an operation?

Use the AWS CLI from a Bourne shell script.
attach-volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-01474ef662b89480 --device /dev/sdf
detach-volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
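As a minimal sketch (the volume ID, instance ID, device, and mount point are placeholders you supply), the two CLI calls can be wrapped with the EC2 waiters so the mount only happens once the attachment completes; the attachment itself usually takes a few seconds, which the `wait` subcommands poll for. On Windows you would bring the disk online with diskpart or PowerShell instead of `mount`.

```shell
#!/bin/bash
# Sketch: attach-then-mount and unmount-then-detach via the AWS CLI.
# All IDs, devices, and paths are placeholders.

attach_and_mount() {
  local volume_id="$1" instance_id="$2" device="$3" mount_point="$4"
  if [ "$#" -ne 4 ]; then
    echo "usage: attach_and_mount <volume-id> <instance-id> <device> <mount-point>" >&2
    return 1
  fi
  aws ec2 attach-volume --volume-id "$volume_id" --instance-id "$instance_id" --device "$device"
  # Attachment is asynchronous; block until EC2 reports the volume in use.
  aws ec2 wait volume-in-use --volume-ids "$volume_id"
  sudo mkdir -p "$mount_point"
  sudo mount "$device" "$mount_point"
}

unmount_and_detach() {
  local volume_id="$1" mount_point="$2"
  if [ "$#" -ne 2 ]; then
    echo "usage: unmount_and_detach <volume-id> <mount-point>" >&2
    return 1
  fi
  # Unmount first, as the detach-volume documentation above requires.
  sudo umount "$mount_point"
  aws ec2 detach-volume --volume-id "$volume_id"
  aws ec2 wait volume-available --volume-ids "$volume_id"
}
```

For example, `attach_and_mount vol-1234567890abcdef0 i-01474ef662b89480 /dev/sdf /data` followed later by `unmount_and_detach vol-1234567890abcdef0 /data`.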
--------------------------------------------------------------------------
Use Python and Boto3, which has APIs to attach and detach volumes.
attach_volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
import boto3

client = boto3.client('ec2')
response = client.attach_volume(
    DryRun=True|False,
    VolumeId='string',
    InstanceId='string',
    Device='string'
)
detach_volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
response = client.detach_volume(
    DryRun=True|False,
    VolumeId='string',
    InstanceId='string',
    Device='string',
    Force=True|False
)

Related

How can I attach a persistent EBS volume to an EC2 Linux launch template that is used in an autoscaling group?

To clarify: my Auto Scaling group removes all instances and their root EBS volumes during inactive hours, then recreates them and installs all the necessary base programs once active hours begin. However, I have a smaller, persistent EBS volume holding code and data that I do not want wiped out during downtime. At the moment, every time I work during active hours I manually attach the volume via the console and mount it using the commands below.
sudo mkdir userVolume
sudo mount /dev/xvdf userVolume
How can I automatically attach and mount this volume to a folder? This is all for the sake of minimizing cost, limiting uptime to when I can actually be working.
Use this code:
#!/bin/bash
OUTPUT=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id "$OUTPUT" --region ap-southeast-1
Set your volume ID and region.
Refer to this link for further details: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume/
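To cover the mounting step as well, here is a sketch of launch-template user data that attaches the volume at boot, waits for the device node to appear, and then mounts it. The volume ID, region, device, and mount point are placeholders, and the instance role is assumed to be allowed to call ec2:AttachVolume.

```shell
#!/bin/bash
# Sketch of launch-template user data (all IDs, region, and paths are placeholders;
# the instance profile must permit ec2:AttachVolume).
VOLUME_ID=vol-xxxxxxxxxxxx
REGION=ap-southeast-1
DEVICE=/dev/xvdf
MOUNT_POINT=/userVolume

# Attachment is asynchronous: poll until the kernel exposes the device node.
wait_for_device() {
  local dev="$1" tries=30
  until [ -e "$dev" ] || [ "$tries" -eq 0 ]; do
    sleep 2
    tries=$((tries - 1))
  done
  [ -e "$dev" ]
}

attach_and_mount() {
  local instance_id
  instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  aws ec2 attach-volume --volume-id "$VOLUME_ID" --device "$DEVICE" \
    --instance-id "$instance_id" --region "$REGION"
  wait_for_device "$DEVICE" || return 1
  mkdir -p "$MOUNT_POINT"
  mount "$DEVICE" "$MOUNT_POINT"
}
```

The user-data script would end by calling `attach_and_mount`, so the volume is ready every time the Auto Scaling group brings an instance back up.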

Attaching an EBS volume to AWS Batch Compute Environments

I want to set up AWS Batch to run a few Python scripts that perform batch operations on files fetched from S3; after processing, the results need to be saved to a volume.
For this I want to configure compute environments in AWS Batch.
I wish to use Spot Instances, but I need my EBS volume to persist even after the instance is terminated, and if a new instance is spun up it has to mount the same volume as used before.
Create a launch template and provide a bootstrap (user data) script; for the case described, something like:
sudo mkdir -p /<directory where the volume will be mounted, e.g. /data>
aws ec2 attach-volume --volume-id <volume_id> --instance-id $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf
sudo mount /dev/sdf /<the same directory, e.g. /data>
In the AWS Batch compute environment definition, use this template to launch your EC2 instances.

EBS volume attached on running instance lambda logic required

I have written a Lambda function that should be triggered only when an EBS volume is attached to a running instance and the status is "completed". If this condition is true, the Lambda function below should mount the two EBS devices on my running OS.
Is this possible with a Lambda function or not? Please help me with the code below. My two EBS volumes are created and attached with a CloudFormation template.
from __future__ import print_function
import boto3
import os

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    volume = event['resources'][0].split('/')[1]
    if event['detail']['result'] == 'completed':
        attach = ec2.describe_volumes(VolumeIds=[volume])['Volumes'][0]['Attachments']
        if attach:
            instance = attach[0]['InstanceId']
            if instance:
                os.system('mkfs -t ext4 /dev/xvdg')
                os.system('mkfs -t ext4 /dev/xvdf')
                os.system('mkdir -p /var/www/production /var/lib/SQL')
                os.system('mount /dev/xvdf /var/www/production')
                os.system('mount /dev/xvdg /var/lib/SQL')
                os.system('echo /dev/xvdf /var/www/production ext4 defaults,nofail 0 2 >> /etc/fstab')
                os.system('echo /dev/xvdg /var/lib/SQL ext4 defaults,nofail 0 2 >> /etc/fstab')
        else:
            raise Exception('Volume ' + volume + ' not currently attached to an instance')
Rohan,
Without some magic, Lambda cannot execute commands on EC2 instances.
I am assuming that you are creating new EC2 instances using CloudFormation and attaching the EBS volumes at that time.
You can put your OS level commands into the UserData under Resources in CloudFormation.
This link has examples:
Deploying Applications on Amazon EC2 with AWS CloudFormation
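As a minimal sketch of that approach (the resource name, AMI ID, and volume ID are placeholders), the mount commands from the question move into the instance's UserData and the attachment is declared on the instance itself, so neither step needs Lambda:

```yaml
MyInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-xxxxxxxx
    InstanceType: t2.micro
    Volumes:
      - Device: /dev/xvdf
        VolumeId: vol-xxxxxxxxxxxx
    UserData:
      Fn::Base64: |
        #!/bin/bash
        mkdir -p /var/www/production
        mount /dev/xvdf /var/www/production
        echo '/dev/xvdf /var/www/production ext4 defaults,nofail 0 2' >> /etc/fstab
```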

Terminate an AWS EC2 instance without leaving a volume behind

I started an instance based on my AMI (based on Ubuntu 12.04 server) with the following command.
aws ec2 run-instances --image-id MY_AMI_ID --count 1 --instance-type t1.micro
What's surprising is, after I terminated the instance using the following command, it left a volume behind.
aws ec2 terminate-instances --instance-id MY_INSTANCE_ID
I would like to have the volume destroyed automatically, not sure if there is an easy option in the command line to do it.
Did you attach the volume after launching the instance?
Amazon EC2 deletes volumes that were attached during instance launch; only volumes attached after the instance is launched are left behind.
Your AMI probably has the option set not to delete its block devices on termination. You can adjust this behavior in your AMI via the "delete-on-termination" option in the AWS Console or the ec2-register command:
http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RegisterImage.html
I found that http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html has an example:
aws ec2 modify-instance-attribute --instance-id i-63893fed --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":true}}]"
That solves my problem: now after an instance is terminated, it will not leave a volume behind.
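The same block-device-mapping JSON can also be supplied at launch time, so the root volume is marked for deletion from the start instead of being patched afterwards (the AMI ID is a placeholder, as in the question):

```shell
# DeleteOnTermination set in the launch request itself (AMI ID is a placeholder).
BDM='[{"DeviceName": "/dev/sda1", "Ebs": {"DeleteOnTermination": true}}]'
echo "$BDM"
# aws ec2 run-instances --image-id MY_AMI_ID --count 1 --instance-type t1.micro \
#     --block-device-mappings "$BDM"
```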

attaching a previous EBS Volume to a new EC2 Linux Instance

I ran into a problem the other day while cloning a github repo, and all of a sudden my EC2 Instance (EC2 A) became completely unusable. My question is: how can I re-attach an EBS Volume from an EC2 Instance that I terminated to a new EC2 Instance that I just created?
Step-by-Step of the problem:
0) broke my first EC2 Instance (EC2 A).
1) created a snapshot of the EBS Volume (EBS Volume A) attached to EC2 A.
2) stopped EC2 A.
3) detached EBS Volume A.
4) terminated EC2 A.
Then...
5) created a brand new EC2 Instance (EC2 B) with a new EBS Volume automatically created (EBS Volume B), which is currently attached to EC2 B.
6) set it all up (apache, mysql, php, other plugins, etc...)
7) Now I want to access my data from EBS Volume A. I do not care about anything in EBS Volume B. Please Advise...
Thank you so much for your time!
Yes, you can attach an existing EBS volume to an EC2 instance. There are a number of ways to do this depending on your tools of preference. I prefer the command line tools, so I tend to do something like:
ec2-attach-volume vol-VVVVVVVV --instance i-XXXXXXXX --device /dev/sdh
You could also do this in the AWS console:
https://console.aws.amazon.com/ec2/home?#s=Volumes
Right click on the volume, then select [Attach Volume]. Select the instance and enter the device (e.g., /dev/sdh).
After you have attached the volume to the instance, you will want to ssh to the instance and mount the volume with a command like:
sudo mkdir -m000 /vol2
sudo mount /dev/xvdh /vol2
You can then access your old data and configuration under /vol2.
Note: The EBS volume and the EC2 instance must be in the same region and in the same availability zone to make the attachment.
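One gotcha the mount step above hints at: the device name passed to attach (/dev/sdh) often surfaces inside the guest as /dev/xvdh. A tiny helper to translate the name (an assumption that holds for many common Linux AMIs; always confirm with lsblk):

```shell
# On many Linux AMIs a volume attached as /dev/sdX appears in the guest as
# /dev/xvdX; this helper translates the name. Always confirm with lsblk.
guest_device() {
  printf '%s\n' "$1" | sed 's|^/dev/sd|/dev/xvd|'
}

guest_device /dev/sdh   # -> /dev/xvdh
```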