I have a AWS EC2 instance That i need to manually access to the AWS console and make a daily image of the machine(AMI)
How i can make a daily AMI backup of the machine and delete old version (old then 7 days)?
Thank you!
Anything that you can do through the web console you can also do through the CLI.
In this particular case, I suspect a combination of aws ec2 create-image, aws ec2 describe-images, and aws ec2 deregister-image would let you do what you want.
AWS lambda would be a right solution to automate the backup of your ami and clean up. You can schedule the lambda function (basically a python code) to run periodically. This way you don't need to have your ec2 running all the time. An example here http://powerupcloud.azurewebsites.net/2016/10/15/serverless-automate-ami-creation-and-deletion-using-aws-lambda/
Below is a shell script I use that runs daily via cron. You can set the value of a variable prevday1 to set how long you want to keep your images. In your case you want 7 days to it would be
prevday1=$(date --date="7 days ago" +%Y-%m-%d)
Here is the full script:
#!/bin/bash
# prior to using this script you will need to install the aws cli on the local machine
# https://docs.aws.amazon.com/AmazonS3/latest/dev/setup-aws-cli.html
# AWS CLI - will need to configure this
# sudo apt-get -y install awscli
# example of current config - july 10, 2020
#aws configure
#aws configure set key ARIAW5YUMJT7PO2N7L *fake - user your own*
#aws configure secret X2If+xa/rFITQVMrgdQVpFLx1c7fwP604QkH/x *fake - user your own*
#aws configure set region us-east-2
#aws configure set format json
# backup EC2 instances nightly 4:30 am GMT
# 30 4 * * * . $HOME/.profile; /var/www/devopstools/shell-scripts/file_backup_scripts/ec2_backup.sh
script_dir="$(dirname "$0")"
# If you want live notifications about backups
#SLACK_API_URL="https://hooks.slack.com/services/T6VQ93KM/BT8REK5/hFYEDUCoO1Bw72wxxFSj7oY"
source "$script_dir/includes/helpers.sh"
prevday1=$(date --date="2 days ago" +%Y-%m-%d)
prevday2=$(date --date="3 days ago" +%Y-%m-%d)
today=$(date +"%Y-%m-%d")
instances=()
# add as many instances to backup as needed
instances+=("autobackup_impressto|i-0ed78a1f3583e1859")
for ((i = 0; i < ${#instances[#]}; i++)); do
instance=${instances[$i]}
instanceName="$(cut -d'|' -f1 <<<"$instance")"
instanceId="$(cut -d'|' -f2 <<<"$instance")"
prevImageName1="${instanceName}_${prevday1}_$instanceId"
prevImageName2="${instanceName}_${prevday2}_$instanceId"
newImageName="${instanceName}_${today}_$instanceId"
consoleout --green "Begin backing $instanceName [$instanceId]"
aws ec2 create-image \
--instance-id $instanceId \
--name "$newImageName" \
--description "$instanceName" \
--no-reboot
if [ $? -eq 0 ]; then
echo "$newImageName created."
echo ""
if [ ! -z "${SLACK_API_URL}" ]; then
curl -X POST -H 'Content-type: application/json' --data '{"text":":rotating_light: Backing up *'$newImageName'* to AMI. :rotating_light:"}' ${SLACK_API_URL} fi
echo -e "\e[92mBacking up ${newImageName} to AMI."
else
echo "$newImageName not created."
echo ""
fi
imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName1}" --query 'Images[*].[ImageId]' --output text)
if [ ! -z "${imageId}" ]; then
echo "Deregistering ${prevImageName1} [${imageId}]"
echo ""
echo "aws ec2 deregister-image --image-id ${imageId}"
aws ec2 deregister-image --image-id ${imageId}
fi
imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName2}" --query 'Images[*].[ImageId]' --output text)
if [ ! -z "${imageId}" ]; then
echo "Deregistering ${prevImageName2} [${imageId}]"
echo ""
echo "aws ec2 deregister-image --image-id ${imageId}"
aws ec2 deregister-image --image-id ${imagesId}
fi
consoleout --green "Completed backing $instanceName"
done
Also available here - https://impressto.net/automatic-aws-ec2-backups/
You can use https://github.com/alestic/ec2-consistent-snapshot and run it in a cron job. It supports various filesystems and has support for ensuring database snapshots are consistent. If you don't have a database in your instance, it will still ensure consistent snapshots by freezing the filesystem.
Related
I run falco and falcosidekick with docker compose, without k8s.
I need to retrive aws instance metadata to falco rules output.
I've found the jevt field class but I encountered an error on falco container start
Invalid output format 'command=%jevt.value[/awsRegion': 'invalid formatting token jevt.value[/awsRegion']
Here my rules:
- rule: Terminal shell in container
desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
condition: >
spawned_process and container
and shell_procs and proc.tty != 0
and container_entrypoint
and not user_expected_terminal_shell_in_container_conditions
output: >
command=%jevt.value["/awsRegion"]
priority: NOTICE
tags: [ container, shell, mitre_execution ]
How can I do?
Thank you
several things to know:
the syntax for jevt.value is jevt.value[/awsRegion] (no quotes)
these kind fields are for events in json format, it works for kubernetes audit logs but in your case where the rule is based on syscalls
falco will not query aws metadata either, you will not have this information in your output like this
Regards,
Falco doesn't query AWS metadata, so I retrieved the metadata with an aws cli describe-instances and passed the metadata to falcosidekick container.
#loading EC2 metadata
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_IP=$(aws ec2 describe-instances --instance-id "$INSTANCE_ID" --region eu-west-1 --query 'Reservations[*].Instances[*].{InstanceIp:PublicIpAddress}' --output text)
CLUSTER_NAME=$(aws ec2 describe-instances --instance-id "$INSTANCE_ID" --region eu-west-1 --query 'Reservations[*].Instances[*].{ClusterName:Tags[?Key==`Name`]|[0].Value}' --output text)
docker run -d -p 2801:2801 -d \
-e CUSTOMFIELDS=INSTANCE_ID:"$INSTANCE_ID",INSTANCE_IP:"$INSTANCE_IP",CLUSTER_NAME:"$CLUSTER_NAME" \
--name falcosidekick \
falcosecurity/falcosidekick
How can we save AWS RDS manual snapshots on the s3 bucket(on the same account)?
Is Aws will charge for automated RDS snapshots?
Do you guys have a solution for this?
Thanks in advance.
How can we save AWS RDS manual snapshots on the s3 bucket(on the same
account)?
You cannot. AWS does not provide access to the raw data of snapshots.
Is Aws will charge for automated RDS snapshots?
Yes, AWS charges for the storage space that snapshots use.
RDS snapshots are only accessible through the RDS console / CLI.
If you want to export data to your own S3 bucket, you'll need to grab that information directly from the database instance. Something like a mysqldump, etc
If you use automated snapshots then AWS will charge you for those.
This is a script i've used in the past to backup a MySQL/Aurora RDS to an S3 bucket:
#!/usr/bin/env bash
set -o errexit
set -o pipefail
set -o nounset
function log {
echo "[`date '+%Y-%m-%d %H:%M:%S.%N'`] $1"
}
: ${MYSQL_USER:=root}
: ${MYSQL_PASS:=root}
: ${MYSQL_HOST:=127.0.0.1}
: ${MYSQL_PORT:=3306}
if [ -z "${AWS_S3_BUCKET-}" ]; then
log "The AWS_S3_BUCKET variable is empty or not set"
exit 1;
fi;
EXCLUDED_DATABASES=(Database information_schema mysql performance_schema sys tmp innodb)
YEAR=$(date '+%Y')
MONTH=$(date '+%m')
DAY=$(date '+%d')
TIME=$(date '+%H-%M-%S')
if [ -z "${MYSQL_DATABASE-}" ]; then
DATABASES=$(/usr/bin/mysql --host="$MYSQL_HOST" --port="$MYSQL_PORT" --user="$MYSQL_USER" --password="$MYSQL_PASS" -e "SHOW DATABASES;" | cut -d ' ' -f 1)
else
DATABASES=("$MYSQL_DATABASE")
fi;
for DATABASE in $DATABASES; do
for EXCLUDED in ${EXCLUDED_DATABASES[#]}; do
if [ "$DATABASE" == "$EXCLUDED" ]; then
log "Excluded mysqlbackup of $DATABASE"
continue 2
fi;
done
log "Starting mysqlbackup of $DATABASE"
AWS_S3_PATH="s3://$AWS_S3_BUCKET/path-to-folder/$DATABASE.sql.gz"
mysqldump --host="$MYSQL_HOST" --port="$MYSQL_PORT" --user="$MYSQL_USER" --password="$MYSQL_PASS" "$DATABASE" | gzip | AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp - "$AWS_S3_PATH"
log "Completed mysqlbackup of $DATABASE to $AWS_S3_PATH"
done
I have followed the below code from the AWS to start a ECS task when the EC2 instance launches. This works great.
However my containers only run for a few minutes(max ten) then once finished the EC# is shutdown using a cloudwatch rule.
The problem I am find is that due to the instances shutting down straight after the task is finished the auto clean up of the docker containers doesn't happen resulting in the EC2 instance getting full up stopping other tasks to fail. I have tried the lower the time between clean up but it still can be a bit flaky.
Next idea was to add docker system prune -a -f to the user data of the EC2 instance but it doesnt seem to get ran. I think its because I am putting it in the wrong part of the user data, I have searched through the docs for this but cant find anything to help.
Question where can I put the docker prune command in the user data to ensure that at each launch the prune command is ran?
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
I hadn't considered terminated then creating a new instance.
I use cloud formation currently to create EC2.
What's the best workflow for terminating an EC2 after the task definition has completed then on schedule create a new one registering it to the ECS cluster?
Cloud watch scheduled rule to start lambda that creates EC2 then registers to cluster?
I have found a script for starting/stopping a dynamically created ec2 instance, but how do I start any instances in my inventory?
Seems you are talking about scripting, not SDK. So there are two tools to do the job.
1 AWS CLI tools
download aws cli tool and set the API Key in $HOME/.aws/credentials
list all instances on region us-east-1
Confirm which instances you are targeting.
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --region us-east-1 --output text
2 Amazon EC2 Command Line Interface Tools
download and setup instruction
list all instances on region us-east-1
You should get same output as WAY #1.
ec2-describe-instances --region us-west-2 |awk '/INSTANCE/{print $2}'
With the instance ID list, you can use your command to start them one by one.
for example, the instance name are saved in file instance.list
while read instance
do
echo "Starting instance $instance ..."
ec2-start-instances "$linstance"
done < instance.list
BMW, give you an excellent startup, but you can even summarise the thing like this:
1) First get the id of all the instances and save them into a file
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --region us-east-1 --output text >> id.txt
2) Then simply run this command to start all the instances
for id in $(awk '{print $1}' id.txt); do echo "starting the following instance $id"; aws ec2 start-instances --instance-ids --region us-east-1 $id; done
Please change the region, I am considering that you have installed and setup the AWS CLI tools properly. Thanks
Imagine you have a set of ebs volumes for data and you are frequently mounting these SAME set of EBS volumes to a ec2 node that changes over time (because you kill it every time you do not need it anymore and create a new one when you need it again) but on every creation ec2 instance could have different virtype, OS, instance types an so on (for whatever reason), what is the best way to automatically mount these EBS volumes on this a given ec2 instance when all you have is the ebs volume id and access to ec2 api to get the ebs device name?
Any program available to do so?
Btw, I am not talking about attaching the volumes and interested in automatically mounting to known directories on the os file system on instance creation given that the device name varies from os to os when compared to device name on ec2 and also it is preferred to use UUID in /etc/fstab instead of device name.
Use filesystem labels:
$ tune2fs -L "disk1" /dev/xvdf
$ tune2fs -L "disk2" /dev/xvdg
In your /etc/fstab:
LABEL=disk1 /disk1 auto defaults 0 2
LABEL=disk2 /disk2 auto defaults 0 2
In you /etc/rc.local:
# Note: You could store the volume-ids and devices in the ec2 tags of your instance.
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id $INSTANCE_ID --device /dev/xvdf
aws ec2 attach-volume --volume-id vol-1234abcf --instance-id $INSTANCE_ID --device /dev/xvdg
# wait for them to mount
until [ "$(aws ec2 describe-volume-status --volume-id vol-1234abcd --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
until [ "$(aws ec2 describe-volume-status --volume-id vol-1234abcf --query 'VolumeStatuses[0].VolumeStatus.Status' --output text)" = ok ]; do sleep 5; done
# mount /etc/fstab entries
mount -a
# I also store the EIP as a tag
EIP="$(aws ec2 describe-instances --instance-id $INSTANCE_ID --query 'Reservations[*].Instances[*].[Tags[?Key==`EIP`]|[0].Value]' --output text)"
if [ $? -eq 0 ] && [ "$EIP" != "" ] && [ "$EIP" != "None" ]; then
aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip "$EIP" --query 'return' --output text
fi
You could script this using AWS CLI and the command attach-volume.
From the AWS CLI example your command would look similar to:
aws ec2 attach-volume --volume-id vol-1234abcd --instance-id i-abcd1234 --device /dev/sdf
I would also suggest creating an IAM role and attaching it to the ec2 instances that you launch so that you do not have to put any IAM users' credentials on the instance.
You mentioned that you may be attaching the volume to different Operating Systems across ec2 launches, in that case all the OSs would have to support the filesystem type of the partitions on the volume that they wish to mount.