EBS volume attached on running instance lambda logic required - amazon-web-services

I have written a Lambda function that should be triggered only when an EBS volume is attached to a running instance and the attachment status is completed. If this condition is true, the Lambda function below should run and mount the two EBS devices on my running OS.
Is this possible with a Lambda function or not? Please help me with the code below. My two EBS volumes are created and attached with a CloudFormation template.
from __future__ import print_function
import boto3
import os

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Volume ID is the last part of the resource ARN in the event
    volume = event['resources'][0].split('/')[1]
    if event['detail']['result'] == 'completed':
        attach = ec2.describe_volumes(VolumeIds=[volume])['Volumes'][0]['Attachments']
        if attach:
            instance = attach[0]['InstanceId']
            if instance:
                # Format, mount and persist the two devices
                os.system('mkfs -t ext4 /dev/xvdg')
                os.system('mkfs -t ext4 /dev/xvdf')
                os.system('mkdir -p /var/www/production /var/lib/SQL')
                os.system('mount /dev/xvdf /var/www/production')
                os.system('mount /dev/xvdg /var/lib/SQL')
                os.system('echo /dev/xvdf /var/www/production ext4 defaults,nofail 0 2 >> /etc/fstab')
                os.system('echo /dev/xvdg /var/lib/SQL ext4 defaults,nofail 0 2 >> /etc/fstab')
        else:
            raise Exception('Volume ' + volume + ' not currently attached to an instance')

Rohan,
Without some magic, Lambda cannot execute commands on EC2 instances: a Lambda function runs in its own environment, so the mkfs/mount calls above would execute inside the Lambda container, not on your instance.
I am assuming that you are creating new EC2 instances using CloudFormation and attaching the EBS volumes at that time.
You can put your OS-level commands into the UserData property of the instance resource in your CloudFormation template.
This link has examples:
Deploying Applications on Amazon EC2 with AWS CloudFormation
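For illustration only, since the mechanism is the same whether the instance is defined in a CloudFormation template or launched directly: a minimal Boto3 sketch that passes the question's format/mount commands as user data at launch might look like the following. The AMI ID and instance type are placeholders, and in a CloudFormation template the same shell script would go into the instance resource's UserData property instead.

import boto3

ec2 = boto3.client('ec2')

# OS-level commands from the question, executed on the instance itself at first boot
user_data = """#!/bin/bash
mkfs -t ext4 /dev/xvdf
mkfs -t ext4 /dev/xvdg
mkdir -p /var/www/production /var/lib/SQL
mount /dev/xvdf /var/www/production
mount /dev/xvdg /var/lib/SQL
echo '/dev/xvdf /var/www/production ext4 defaults,nofail 0 2' >> /etc/fstab
echo '/dev/xvdg /var/lib/SQL ext4 defaults,nofail 0 2' >> /etc/fstab
"""

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder AMI
    InstanceType='t2.micro',          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    UserData=user_data,               # Boto3 base64-encodes this for run_instances
)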

Related

Callbacks for AWS EBS Backups or Snapshots

I want to make a backup of several EBS volumes attached to an EC2 instance. Snapshots and AWS Backup are available.
The issue is that before I can do a backup, I must execute a special script to freeze my application.
Why: the application is somewhat in-memory, so this call forces all pending writes to disk to complete and also prevents new disk writes for the duration of the backup.
Is there a way to execute an arbitrary bash script before the backup job/snapshot?
Unfortunately, there's no native --pre-snapshot-script option for creating an EBS snapshot.
However, you could use a Lambda function, triggered by a scheduled EventBridge rule, to run a script on the instance before you then programmatically take the EBS snapshot. You'd need the SSM Agent to be installed on your EC2 instance(s).
The idea is to use ssm:SendCommand and the AWS-RunShellScript managed document to run Linux shell commands on your EC2 instance(s).
You have a couple of decisions to make, specifically:
whether you want to inline all of your 'special script' in your Lambda function, download it from S3, fetch it during your user data script, transfer it manually, etc.
whether you can run Boto3's create_snapshot() (or the AWS CLI's ec2 create-snapshot command) as part of your 'special script'. If not, and you want to take the snapshot separately after calling send_command, you will also have to use Boto3's ssm.get_command_invocation (or the AWS CLI's ssm get-command-invocation command) to poll and wait for the command to finish before you create your snapshot (see the sketch after the starting-point code below).
What you decide to do is dependent on your specific requirements and how much infrastructure you want to manage, but the essence remains the same.
This could be a good starting point:
import boto3

def lambda_handler(event, context):
    ssm = boto3.client('ssm')
    commands = ["echo 'Command xxxxx of special script'"]
    commands.append("aws ec2 create-snapshot --volume-id vol-yyyyy")
    ssm.send_command(
        InstanceIds=['i-zzzzz'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': commands}
    )
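And if you would rather take the snapshot from the Lambda function itself instead of from within the shell commands, a minimal sketch of the poll-then-snapshot variant could look like this. The instance ID, volume ID and freeze-script path are placeholders you would replace with your own:

import time
import boto3

ssm = boto3.client('ssm')
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # Run the 'special script' on the instance (hypothetical script path)
    response = ssm.send_command(
        InstanceIds=['i-zzzzz'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': ['/opt/myapp/freeze.sh']}
    )
    command_id = response['Command']['CommandId']

    # Give SSM a moment to register the invocation, then poll until it finishes
    time.sleep(2)
    while True:
        invocation = ssm.get_command_invocation(CommandId=command_id, InstanceId='i-zzzzz')
        if invocation['Status'] not in ('Pending', 'InProgress', 'Delayed'):
            break
        time.sleep(5)

    if invocation['Status'] != 'Success':
        raise RuntimeError('Freeze script did not succeed: ' + invocation['Status'])

    # Only now take the snapshot (placeholder volume ID)
    ec2.create_snapshot(VolumeId='vol-yyyyy', Description='Snapshot taken after freeze script')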

How can I attach a persistent EBS volume to an EC2 Linux launch template that is used in an autoscaling group?

To clarify: my Auto Scaling group removes all instances and their root EBS volumes during inactive hours, then, once inside active hours, recreates them and installs all necessary base programs. However, I have a smaller EBS volume that is persistent and holds code and data I do not want getting wiped out during down times. I am currently attaching it manually via the console and mounting it every time I am working inside active hours, using the commands below.
sudo mkdir userVolume
sudo mount /dev/xvdf userVolume
How can I automatically attach and mount this volume to a folder? This is all for the sake of minimizing cost by limiting uptime to when I can actually be working on it.
Use this code:
#!/bin/bash
OUTPUT=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-xxxxxxxxxxxx --device /dev/xvdf --instance-id "$OUTPUT" --region ap-southeast-1
Set your volume ID and region.
Refer to this link for further details: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-spot-instance-attach-ebs-volume/
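If you prefer to do the whole attach-and-mount from one boot-time script (for example in the launch template's user data), here is a rough Boto3 sketch of the same idea; the volume ID, region, device name and mount point are assumptions to replace with your own, and on Nitro-based instance types the device may actually appear under an NVMe name such as /dev/nvme1n1:

import subprocess
import urllib.request
import boto3

VOLUME_ID = 'vol-xxxxxxxxxxxx'   # placeholder: the persistent volume
DEVICE = '/dev/xvdf'             # placeholder: device name used at attach time
MOUNT_POINT = '/userVolume'      # placeholder: where to mount it

# Instance ID from the instance metadata service (IMDSv1 shown for brevity)
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id'
).read().decode()

ec2 = boto3.client('ec2', region_name='ap-southeast-1')
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=instance_id, Device=DEVICE)

# Wait until the volume reports 'in-use' before trying to mount it
ec2.get_waiter('volume_in_use').wait(VolumeIds=[VOLUME_ID])

subprocess.run(['mkdir', '-p', MOUNT_POINT], check=True)
subprocess.run(['mount', DEVICE, MOUNT_POINT], check=True)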

Configure shell script execution every hour in ec2 instance using terraform

I have a shell script which I want to configure on an AWS EC2 instance to run every hour. I am using Terraform to launch the EC2 instance. Is it possible to configure the hourly execution of the shell script through Terraform itself while launching the EC2 instance?
Yes, in the aws_instance resource you can use the user_data argument to execute a script at launch that registers a cron job that executes hourly:
resource "aws_instance" "foo" {
ami = "ami-005e54dee72cc1d00" # us-west-2
instance_type = "t2.micro"
...
user_data = <<-EOF
sudo service cron start
echo '0 * * * * date >> ~/somefile' | crontab
EOF
}
Ensure that NTP is configured on the instance and that you are using UTC for the system time.
Helpful links
AWS EC2 Documentation, User Data
POSIX crontab
Terraform AWS provider, EC2 instance user_data

Attaching an EBS volume to AWS Batch Compute Environments

I want to set up AWS Batch to run a few Python scripts that do some batch operations on files fetched from S3; after processing, the results need to be saved to a volume.
For this I want to configure compute environments in AWS Batch.
I wish to use Spot Instances, but I need my EBS volume to still be there even after an instance is terminated, and if a new instance is spun up it has to mount the same volume as used before.
Create a launch template and provide a bootstrap script; for the mentioned case, something like:
sudo mkdir -p /<any directory name where the volume will be mounted, e.g. dir>
aws ec2 attach-volume --volume-id <volume_id> --instance-id $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf
sudo mount /dev/sdf /<above mentioned dir, e.g. dir>
In the AWS Batch compute environment definition, reference the above launch template so that it is used to launch your EC2 machines.
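If you want to create that launch template programmatically, a rough Boto3 sketch could look like the following. The template name, volume ID and mount directory are placeholders, and note that user data supplied to AWS Batch through a launch template is generally expected to be in MIME multi-part format:

import base64
import boto3

# Bootstrap commands from the answer above; volume ID and mount dir are placeholders
bootstrap = """MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
mkdir -p /data
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id) \
  --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids vol-0123456789abcdef0
mount /dev/sdf /data

--==BOUNDARY==--
"""

ec2 = boto3.client('ec2')
ec2.create_launch_template(
    LaunchTemplateName='batch-persistent-ebs',  # hypothetical name
    LaunchTemplateData={
        # create_launch_template expects user data to be base64-encoded
        'UserData': base64.b64encode(bootstrap.encode()).decode()
    }
)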

Mount EBS volume to a running AWS instance with a script

I'd like to dynamically mount and unmount EBS volumes on a running AWS instance using a script, and was wondering whether this is achievable on both Linux and Windows instances, and if so, what the expected duration of such an operation is.
Using the AWS CLI and a Bourne shell script:
attach-volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-01474ef662b89480 --device /dev/sdf
detach-volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
aws ec2 detach-volume --volume-id vol-1234567890abcdef0
--------------------------------------------------------------------------
Use Python and Boto3, which has APIs to attach and detach volumes.
attach_volume
Attaches an EBS volume to a running or stopped instance and exposes it
to the instance with the specified device name.
import boto3

client = boto3.client('ec2')
# Example IDs as in the CLI commands above
response = client.attach_volume(
    DryRun=False,
    VolumeId='vol-1234567890abcdef0',
    InstanceId='i-01474ef662b89480',
    Device='/dev/sdf'
)
detach_volume
Detaches an EBS volume from an instance. Make sure to unmount any file
systems on the device within your operating system before detaching
the volume.
response = client.detach_volume(
    DryRun=False,
    VolumeId='vol-1234567890abcdef0',
    InstanceId='i-01474ef662b89480',
    Device='/dev/sdf',
    Force=False
)
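To get a feel for the duration, a minimal sketch (reusing the example volume and instance IDs above) could wrap these calls in the corresponding Boto3 waiters and time them. The attach and detach API calls typically settle within seconds, but mounting and unmounting the file system inside the OS is a separate step, as the detach-volume description above points out:

import time
import boto3

ec2 = boto3.client('ec2')
VOLUME_ID = 'vol-1234567890abcdef0'    # example IDs as above
INSTANCE_ID = 'i-01474ef662b89480'

start = time.time()
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID, Device='/dev/sdf')
ec2.get_waiter('volume_in_use').wait(VolumeIds=[VOLUME_ID])
print('attach took %.1f s' % (time.time() - start))

start = time.time()
ec2.detach_volume(VolumeId=VOLUME_ID)
ec2.get_waiter('volume_available').wait(VolumeIds=[VOLUME_ID])
print('detach took %.1f s' % (time.time() - start))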