I'm trying to write a script that, with the help of Jenkins, will look at the updated files in git, download them, and encrypt them using AWS KMS. I have a working script that does all of this, and the files are downloaded to the repository checked out by Jenkins on the local server. My problem is encrypting these files in the Jenkins workspace. Basically, when I encrypt files on my local computer, I use the command:
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://file.json --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
and all is OK, but if I try to do the same from Jenkins I get an error that the aws command is not found.
Does somebody know how to solve this problem and how to run the AWS CLI from Jenkins?
Here is my working code, which breaks down on the last line:
bom_sniffer() {
    head -c3 "$1" | LC_ALL=C grep -qP '\xef\xbb\xbf'
    if [ $? -eq 0 ]
    then
        echo "BOM SNIFFER DETECTED BOM CHARACTER IN FILE \"$1\""
        exit 1
    fi
}

check_rc() {
    # exit if passed in value is not = 0
    # $1 = return code
    # $2 = command / label
    if [ $1 -ne 0 ]
    then
        echo "$2 command failed"
        exit 1
    fi
}
# finding files that differ from this commit and master
echo 'git fetch'
check_rc $? 'echo git fetch'
git fetch
check_rc $? 'git fetch'
echo 'git diff --name-only origin/master'
check_rc $? 'echo git diff'
diff_files=`git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT | xargs`
check_rc $? 'git diff'
for x in ${diff_files}
do
    echo "${x}"
    cat ${x}
    bom_sniffer "${x}"
    check_rc $? "BOM character detected in ${x},"
    aws configure kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
done
After discussion, this is how the issue was resolved:
First, corrected the command by removing configure from it (it should be aws kms encrypt, not aws configure kms encrypt).
Installed awscli for the jenkins user:
pip install awscli --user
Used the absolute path of aws in the script.
For example, if it is in ~/.local/bin/, use ~/.local/bin/aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json in the script.
Or add the directory containing aws to PATH, as sketched below.
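A minimal sketch of what the Jenkins "Execute shell" build step could look like with the PATH approach. The ~/.local/bin location comes from the pip install --user step above; the key ID and the per-file output name are placeholders, not the exact original setup:
#!/bin/bash
# Assumption: awscli was installed with "pip install awscli --user" for the jenkins user,
# so the binary lives in ~/.local/bin. Adjust the path if "command -v aws" reports otherwise.
export PATH="$HOME/.local/bin:$PATH"
command -v aws >/dev/null || { echo "aws CLI still not found on PATH"; exit 1; }
for x in ${diff_files}
do
    # Write each file's ciphertext next to it instead of a single Encrypted-data.json,
    # so consecutive files do not overwrite each other; adjust to taste.
    aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > "${x}.encrypted"
done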
Related
We want to copy a Docker image from a non-prod ECR account to a prod ECR account. Is it possible without pulling, retagging, and pushing it again?
No, you have to run these commands (an ECR-specific sketch follows them):
docker login OLD_REPO
docker pull OLD_REPO/IMAGE:TAG
docker tag OLD_REPO/IMAGE:TAG NEW_REPO/IMAGE:TAG
docker login NEW_REPO
docker push NEW_REPO/IMAGE:TAG
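Since both registries are ECR, the login step goes through the AWS CLI. A hedged sketch of the same flow with ECR registries; the account IDs, region, profiles, repository name, and tag below are placeholders:
SRC_REGISTRY=111111111111.dkr.ecr.us-east-1.amazonaws.com   # non-prod account
DST_REGISTRY=222222222222.dkr.ecr.us-east-1.amazonaws.com   # prod account
aws ecr get-login-password --region us-east-1 --profile nonprod | docker login --username AWS --password-stdin $SRC_REGISTRY
aws ecr get-login-password --region us-east-1 --profile prod | docker login --username AWS --password-stdin $DST_REGISTRY
docker pull $SRC_REGISTRY/my-image:1.0.0
docker tag  $SRC_REGISTRY/my-image:1.0.0 $DST_REGISTRY/my-image:1.0.0
docker push $DST_REGISTRY/my-image:1.0.0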
I have written this program in Python to migrate all the images (or a specific image) from a repository to another region, or to another account in a different region:
https://gist.github.com/fabidick22/6a1962697357360f0d73e01950ae962b
Answer: No, you must pull, tag, and push.
I wrote a bash script for this today. You can specify the number of tagged images that will be copied.
https://gist.github.com/virtualbeck/a635ef6701991f2087384eab7edbb18b
A slight improvement (and maybe a couple of bug fixes) on this answer: https://stackoverflow.com/a/69905254/65706
set -e
################################# UPDATE THESE #################################
LAST_N_TAGS=10
SRC_AWS_REGION="us-east-1"
TGT_AWS_REGION="eu-central-1"
SRC_AWS_PROFILE="your_source_aws_profile"
TGT_AWS_PROFILE="your_target_aws_profile"
SRC_BASE_PATH="386151140899.dkr.ecr.$SRC_AWS_REGION.amazonaws.com"
TGT_BASE_PATH="036149202915.dkr.ecr.$TGT_AWS_REGION.amazonaws.com"
#################################################################################
URI=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryUri' --output text --region $SRC_AWS_REGION))
NAME=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryName' --output text --region $SRC_AWS_REGION))
echo "Start repo copy: `date`"
# source account login
aws --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $SRC_BASE_PATH
# destination account login
aws --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $TGT_BASE_PATH
for i in ${!URI[@]}; do
echo "====> Grabbing latest $LAST_N_TAGS from ${NAME[$i]} repo"
# create ecr repo if one does not exist in destination account
aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION describe-repositories --repository-names ${NAME[$i]} || aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION create-repository --repository-name ${NAME[$i]}
for tag in $(aws ecr describe-images --repository-name ${NAME[$i]} \
--query 'sort_by(imageDetails,& imagePushedAt)[*]' \
--filter tagStatus=TAGGED --output text \
| grep IMAGETAGS | awk '{print $2}' | tail -$LAST_N_TAGS); do
# if [[ ${NAME[$i]} == "repo-name/frontend-nba" ]]; then
# continue
# fi
# # 386517340899.dkr.ecr.us-east-1.amazonaws.com/spectralha-api/white-ref-detector
# if [[ ${NAME[$i]} == "386351741199.dkr.ecr.us-east-1.amazonaws.com/repo-name/white-ref-detector" ]]; then
# continue
# fi
echo "START ::: pulling image ${URI[$i]}:$tag"
AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker pull ${URI[$i]}:$tag
AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker tag ${URI[$i]}:$tag $TGT_BASE_PATH/${NAME[$i]}:$tag
echo "STOP ::: pulling image ${URI[$i]}:$tag"
echo "START ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
# status=$(AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag)
# echo $status
AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag
echo "STOP ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
sleep 2
echo ""
done
# docker image prune -a -f #clean-up ALL the images on the system
done
echo "Finish repo copy: `date`"
echo "Don't forget to purge you local docker images!"
#Uncomment to delete all
#docker rmi $(for i in ${!NAME[@]}; do docker images | grep ${NAME[$i]} | tr -s ' ' | cut -d ' ' -f 3 | uniq; done) -f
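Once the loop finishes, a quick way to check that the tags arrived in the destination account; this reuses the target profile and region variables from the script, with the repository name as a placeholder:
aws ecr describe-images --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION --repository-name my-repo --query 'sort_by(imageDetails,& imagePushedAt)[*].imageTags' --output table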
I'm interested in creating a backup of the current application as an application version. To do that, I'd like to track down the S3 bucket where the application versions are saved, so I can save a new file to that location.
Does anyone know how this is done?
To do this:
Get the instance ID.
Get the environment ID.
Get the application name.
Describe the application versions.
The S3 bucket name is in the describe-application-versions response.
Based on the above, here is my entire backup script. It will figure out steps 1-4 above, then zip up the current app directory, send it to S3, and create a new app version based on it.
#!/bin/bash
die() { echo "$*" >&2; exit 1; }   # abort helper used by the lookups below
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
EB_ENV_ID="`aws ec2 describe-instances --instance-ids $EC2_INSTANCE_ID --region $EC2_REGION | grep -B 1 \"elasticbeanstalk:environment-id\" | grep -Po '\"Value\":\\s+\"[^\"]+\"' | cut -d':' -f 2 | grep -Po '[^\"]+'`"
EB_APP_NAME="`aws elasticbeanstalk describe-environments --environment-id $EB_ENV_ID --region $EC2_REGION | grep 'ApplicationName' | cut -d':' -f 2 | grep -Po '[^\\",]+'`"
EB_BUCKET_NAME="`aws elasticbeanstalk describe-application-versions --application-name $EB_APP_NAME --region $EC2_REGION | grep -m 1 'S3Bucket' | cut -d':' -f 2 | grep -Po '[^\", ]+'`"
DATE=$(date -d "today" +"%Y%m%d%H%M")
FILENAME=application.$DATE.zip
cd /var/app/current
sudo -u webapp zip -r -J /tmp/$FILENAME *
sudo mv /tmp/$FILENAME ~/.
sudo chmod 755 ~/$FILENAME
cd ~
aws s3 cp $FILENAME s3://$EB_BUCKET_NAME/
rm -f ~/$FILENAME
aws elasticbeanstalk create-application-version --application-name $EB_APP_NAME --version-label "$DATE-$FILENAME" --description "Auto-Save $DATE" --source-bundle S3Bucket="$EB_BUCKET_NAME",S3Key="$FILENAME" --no-auto-create-application --region $EC2_REGION
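If you prefer not to parse JSON with grep, the same lookups can be done with the CLI's --query option. A sketch of the application name and bucket lookups only, reusing the variable names from the script above:
# Hedged alternative to the grep-based parsing: let JMESPath do the filtering.
EB_APP_NAME=$(aws elasticbeanstalk describe-environments --environment-id "$EB_ENV_ID" --region "$EC2_REGION" --query 'Environments[0].ApplicationName' --output text)
EB_BUCKET_NAME=$(aws elasticbeanstalk describe-application-versions --application-name "$EB_APP_NAME" --region "$EC2_REGION" --query 'ApplicationVersions[0].SourceBundle.S3Bucket' --output text)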
tl;dr: most probably you want
jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)'
Detailed version:
Add to any of your .ebextensions
packages:
  yum:
    jq: []
(assuming you are using Amazon Linux or CentOS)
and run
export BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)' || sudo !!)"
echo bucketname: $BUCKET
then the full S3 path would be accessible via
bash -c 'source /etc/elasticbeanstalk/.aws-eb-stack.properties; echo s3://${BUCKET} -r ${region:-eu-east-3}'
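Putting the two pieces together, a minimal sketch using the same files and variable names as above (it assumes jq was installed via the .ebextensions snippet):
# Bucket name from the cached EB metadata, region from the EB stack properties.
export BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)')"
source /etc/elasticbeanstalk/.aws-eb-stack.properties   # defines $region
echo "s3://${BUCKET} (region: ${region})"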
I've recently been using the Swap root volume approach for creating a persistent Spot Instance, as described here (Approach 2). Typically it takes 2-5 minutes for my Spot Instance to be fulfilled and the Swap to complete. However, some days, the process never finishes (or at least I get impatient after waiting 20 minutes to an hour!).
To be clear, the Instance is created, but the Swap never happens: I can ssh into the server but my persistent files are not there. I also can see this by going to my AWS console and noting that "spotter" (my persistent storage) has no attachment information:
As the swapping script which I'm using never gives me any errors, it's hard to see what's failing. So, I'm wondering if based on my screenshot I can just use the AWS EC2 Management Console to "manually" perform the swap, and if so, how would I accomplish this.
And, if it helps @Vorsprung,
I initiate the process by running the following script:
# The config file was created in ondemand_to_spot.sh
export config_file=my.conf
cd "$(dirname ${BASH_SOURCE[0]})"
. ../$config_file || exit -1
export request_id=`../ec2spotter-launch $config_file`
echo Spot request ID: $request_id
echo Waiting for spot request to be fulfilled...
aws ec2 wait spot-instance-request-fulfilled --spot-instance-request-ids $request_id
export instance_id=`aws ec2 describe-spot-instance-requests --spot-instance-request-ids $request_id --query="SpotInstanceRequests[*].InstanceId" --output="text"`
echo Waiting for spot instance to start up...
aws ec2 wait instance-running --instance-ids $instance_id
echo Spot instance ID: $instance_id
echo 'Please allow the root volume swap script a few minutes to finish.'
if [ "x$ec2spotter_elastic_ip" = "x" ]
then
# Non elastic IP
export ip=`aws ec2 describe-instances --instance-ids $instance_id --filter Name=instance-state-name,Values=running --query "Reservations[*].Instances[*].PublicIpAddress" --output=text`
else
# Elastic IP
export ip=`aws ec2 describe-addresses --allocation-ids $ec2spotter_elastic_ip --output text --query 'Addresses[0].PublicIp'`
fi
export name=fast-ai
if [ "$ec2spotter_key_name" = "aws-key-$name" ]
then function aws-ssh-spot {
ssh -i ~/.ssh/aws-key-$name.pem ubuntu@$ip
}
function aws-terminate-spot {
aws ec2 terminate-instances --instance-ids $instance_id
}
echo Jupyter Notebook -- $ip:8888
fi
where my.conf is:
# Name of root volume.
ec2spotter_volume_name=spotter
# Location (zone) of root volume. If not the same as ec2spotter_launch_zone,
# a copy will be created in ec2spotter_launch_zone.
# Can be left blank, if the same as ec2spotter_launch_zone
ec2spotter_volume_zone=us-west-2b
ec2spotter_launch_zone=us-west-2b
ec2spotter_key_name=aws-key-fast-ai
ec2spotter_instance_type=p2.xlarge
# Some instance types require a subnet to be specified:
ec2spotter_subnet=subnet-c9cba8af
ec2spotter_bid_price=0.55
# uncomment and update the value if you want an Elastic IP
# ec2spotter_elastic_ip=eipalloc-64d5890a
# Security group
ec2spotter_security_group=sg-2be79356
# The AMI to be used as the pre-boot environment. This is NOT your target system installation.
# Do Not Modify this unless you have a need for a different Kernel version from what's supplied.
# ami-6edd3078 is ubuntu-xenial-16.04-amd64-server-20170113
ec2spotter_preboot_image_id=ami-bc508adc
and the ec2spotter-launch script is:
#!/bin/bash
# "Phase 1" this is the user-facing script for launching a new spot istance
if [ "$1" = "" ]; then echo "USER ERROR: please specify a configuration file"; exit -1; fi
cd $(dirname $0)
. $1 || exit -1
# New instance:
# Desired launch zone
LAUNCH_ZONE=$ec2spotter_launch_zone
# Region is LAUNCH_ZONE minus the last character
LAUNCH_REGION=$(echo $LAUNCH_ZONE | sed -e 's/.$//')
PUB_KEY=$ec2spotter_key_name
# Existing Volume:
# If no volume zone
if [ "$ec2spotter_volume_zone" = "" ]
then # Use instance zone
ec2spotter_volume_zone=$LAUNCH_ZONE
fi
# Name of volume (find it by name later)
ROOT_VOL_NAME=$ec2spotter_volume_name
# zone of volume (needed if different than instance zone)
ROOT_ZONE=$ec2spotter_volume_zone
# Region is Zone minus the last character
ROOT_REGION=$(echo $ROOT_ZONE | sed -e 's/.$//')
#echo "ROOT_VOL_NAME=${ROOT_VOL_NAME}; ROOT_ZONE=${ROOT_ZONE}; ROOT_REGION=${ROOT_REGION}; "
#echo "LAUNCH_ZONE=${LAUNCH_ZONE}; LAUNCH_REGION=${LAUNCH_REGION}; PUB_KEY=${PUB_KEY}"
AWS_ACCESS_KEY=`aws configure get aws_access_key_id`
AWS_SECRET_KEY=`aws configure get aws_secret_access_key`
aws ec2 describe-volumes \
--filters Name=tag-key,Values="Name" Name=tag-value,Values="$ROOT_VOL_NAME" \
--region ${ROOT_REGION} --output=json > volumes.tmp || exit -1
ROOT_VOL=$(jq -r '.Volumes[0].VolumeId' volumes.tmp)
ROOT_TYPE=$(jq -r '.Volumes[0].VolumeType' volumes.tmp)
#echo "ROOT_TYPE=$ROOT_TYPE; ROOT_VOL=$ROOT_VOL";
if [ "$ROOT_VOL_NAME" = "" ]
then
echo "root volume lacks a Name tag";
exit -1;
fi
cat >user-data.tmp <<EOF
#!/bin/sh
echo AWSAccessKeyId=$AWS_ACCESS_KEY > /root/.aws.creds
echo AWSSecretKey=$AWS_SECRET_KEY >> /root/.aws.creds
apt-get update
apt-get install -y jq
apt-get install -y python-pip python-setuptools
apt-get install -y git
pip install awscli
cd /root
git clone --depth=1 https://github.com/slavivanov/ec2-spotter.git
echo Got spotter scripts from github.
cd ec2-spotter
echo Swapping root volume
./ec2spotter-remount-root --force 1 --vol_name ${ROOT_VOL_NAME} --vol_region ${ROOT_REGION} --elastic_ip $ec2spotter_elastic_ip
EOF
userData=$(base64 user-data.tmp | tr -d '\n');
cat >specs.tmp <<EOF
{
"ImageId" : "$ec2spotter_preboot_image_id",
"InstanceType": "$ec2spotter_instance_type",
"KeyName" : "$PUB_KEY",
"EbsOptimized": true,
"Placement": {
"AvailabilityZone": "$LAUNCH_ZONE"
},
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"DeleteOnTermination": true,
"VolumeType": "gp2",
"VolumeSize": 128
}
}
],
"NetworkInterfaces": [
{
"DeviceIndex": 0,
"SubnetId": "${ec2spotter_subnet}",
"Groups": [ "${ec2spotter_security_group}" ],
"AssociatePublicIpAddress": true
}
],
"UserData" : "${userData}"
}
EOF
SPOT_REQUEST_ID=$(aws ec2 request-spot-instances --launch-specification file://specs.tmp --spot-price $ec2spotter_bid_price --output="text" --query="SpotInstanceRequests[*].SpotInstanceRequestId" --region ${LAUNCH_REGION})
echo $SPOT_REQUEST_ID
# Clean up
rm user-data.tmp
rm specs.tmp
rm volumes.tmp
This is not an exact answer, but it may help you find a way to debug the issue.
As I understand it, this is the part of your setup in the ec2spotter-launch script that is responsible for the volume swap:
...
cat >specs.tmp <<EOF
{
"ImageId" : "$ec2spotter_preboot_image_id",
...
"UserData" : "${userData}"
}
EOF
SPOT_REQUEST_ID=$(aws ec2 request-spot-instances --launch-specification file://specs.tmp --spot-price $ec2spotter_bid_price --output="text" --query="SpotInstanceRequests[*].SpotInstanceRequestId" --region ${LAUNCH_REGION})
The specs.tmp file is used as the instance launch specification: --launch-specification file://specs.tmp.
And the "UserData" inside the launch specification is a script which is also generated in ec2spotter-launch:
cat >user-data.tmp <<EOF
#!/bin/sh
echo AWSAccessKeyId=$AWS_ACCESS_KEY > /root/.aws.creds
echo AWSSecretKey=$AWS_SECRET_KEY >> /root/.aws.creds
apt-get update
...
cd /root
git clone --depth=1 https://github.com/slavivanov/ec2-spotter.git
echo Got spotter scripts from github.
cd ec2-spotter
echo Swapping root volume
./ec2spotter-remount-root --force 1 --vol_name ${ROOT_VOL_NAME} --vol_region ${ROOT_REGION} --elastic_ip $ec2spotter_elastic_ip
EOF
The actual work to swap the root volume is performed by the ec2spotter-remount-root script which is downloaded from github.
There are many echo statements in that script, so I think if you find where the output goes, you'll be able to understand what was wrong.
So when you have the issue, you'll ssh to the instance and check the log file.
The question is what file to check (and if the script output is being logged into some file).
Here is what I suggest trying:
Check the standard logs under /var/log that are generated when the instance starts (cloud-init.log, syslog, etc.) to see if you can find the ec2spotter-remount-root output.
Try to enable logging yourself; something similar is discussed here.
I would try modifying the user-data.tmp part in ec2spotter-launch this way:
#!/bin/bash
set -x
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
echo AWSAccessKeyId=$AWS_ACCESS_KEY > /root/.aws.creds
...
echo Swapping root volume
./ec2spotter-remount-root --force 1 --vol_name ${ROOT_VOL_NAME} --vol_region ${ROOT_REGION} --elastic_ip $ec2spotter_elastic_ip
EOF
Here I've changed the first three lines to enable logging into /var/log/user-data.log.
If 1 and 2 don't work, I would try asking the script author on GitHub. As there are lots of echo statements in the script, the author should know where to look for their output.
Hope that helps. Also, you don't need to wait for the issue to appear to try this out; look for the script output on successful runs too.
Or, if you are able to make a few test runs, do that and make sure you can find the log with the script output.
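A few commands that may help once you can ssh into the pre-boot instance. The log locations are assumptions based on a standard Ubuntu AMI (which the preboot image here is), not something the spotter scripts guarantee:
# Where cloud-init usually records user-data output on Ubuntu:
sudo tail -n 200 /var/log/cloud-init-output.log
# Grep the system log for anything the spotter scripts echoed:
sudo grep -i -E 'spotter|swapping root volume' /var/log/syslog
# If the /var/log/user-data.log redirection from the snippet above is in place:
sudo tail -n 200 /var/log/user-data.log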
I am trying to encrypt and decrypt plaintext using aws kms encrypt and decrypt, but it is showing the following error:
aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: --decode, >, ExampleEncryptedFile.txt, base64
The command I used:
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext mysecretpassword --output text --query CiphertextBlob | base64 --decode > ExampleEncryptedFile
If I use it like the below, it works:
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://ExamplePlaintextFile --output text --query CiphertextBlob
Decrypt is also facing an issue: InvalidCiphertextException.
Thanks in advance!
The Unknown options error means that |, base64, --decode, > and the output file name were passed to aws itself as arguments instead of being interpreted by your shell; that typically happens when the command is pasted with a line break in the middle or is not run through a shell. Try these:
#Encrypt
#!/bin/bash
if [ -z "$2" ]
then
echo 'Encrypt a file with AWS KMS'
echo 'Usage: ./encrypt.sh <inputfile> <outputfile>'
echo 'Example: ./encrypt.sh input.txt output.txt'
exit 1
fi
aws kms encrypt --key-id alias/**SOME_KEY_ALIAS** --plaintext fileb://$1 --output text --query CiphertextBlob | base64 --decode > $2
#####
# Decrypt
#!/bin/bash
if [ -z "$2" ]
then
echo 'Decrypt a file with AWS KMS'
echo 'Usage: ./decrypt.sh <inputfile> <outputfile>'
echo 'Example: ./decrypt.sh input.txt output.txt'
exit 1
fi
aws kms decrypt --ciphertext-blob fileb://$1 --output text --query Plaintext | base64 --decode > $2
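Usage would look something like this; the file names are placeholders and the key alias in encrypt.sh still has to be filled in:
./encrypt.sh secrets.json secrets.json.enc
./decrypt.sh secrets.json.enc secrets-decrypted.json
diff secrets.json secrets-decrypted.json && echo "round trip OK"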
I have a S3 bucket with versioning enabled. It is possible to undelete files, but how can I undelete folders?
I know, S3 does not have folders... but how can I undelete common prefixes? Is there a possibility to undelete files recursively?
I created this simple bash script to restore all the files in an S3 folder I deleted:
#!/bin/bash
recoverfiles=$(aws s3api list-object-versions --bucket MyBucketName --prefix TheDeletedFolder/ --query "DeleteMarkers[?IsLatest && starts_with(LastModified,'yyyy-mm-dd')].{Key:Key,VersionId:VersionId}")
for row in $(echo "${recoverfiles}" | jq -c '.[]'); do
    key=$(echo "${row}" | jq -r '.Key')
    versionId=$(echo "${row}" | jq -r '.VersionId')
    # Deleting the delete marker is what restores the object.
    # The leading "echo" only prints the command for review; remove it to actually run the restore.
    echo aws s3api delete-object --bucket MyBucketName --key "$key" --version-id "$versionId"
done
yyyy-mm-dd = the date the folder was deleted
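If the printed commands look right, they can be piped straight into a shell; a hedged example assuming the script above was saved as restore.sh:
bash restore.sh | tee restore-commands.txt   # review the generated commands first
bash restore.sh | bash                       # then execute the delete-marker removals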
I found a satisfying solution here, which is described in more detail here.
To sum up, there is no out-of-the-box tool for this, but a simple bash script wraps the AWS tool "s3api" to achieve the recursive undelete.
The solution worked for me. The only drawback I found is that Amazon seems to throttle the restore operations after about 30,000 files.
You cannot undelete a common prefix. You would need to undelete one object at a time. When an object appears, any associated folder will also reappear.
Undeleting can be accomplished in two ways (both are sketched below):
Delete the Delete Marker, which will reverse the deletion, or
Copy a previous version of the object to itself, which will make the newest version newer than the Delete Marker, so it will reappear. (I hope you understood that!)
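A sketch of both methods for a single object with the AWS CLI; the bucket, key, and version IDs are placeholders you would take from the list-object-versions output:
# Find the delete marker and the previous versions of the object
aws s3api list-object-versions --bucket my-bucket --prefix path/to/file.txt
# Option 1: remove the delete marker (use its VersionId from the output above)
aws s3api delete-object --bucket my-bucket --key path/to/file.txt --version-id DELETE_MARKER_VERSION_ID
# Option 2: copy an older version of the object onto itself, making it the newest version
aws s3api copy-object --bucket my-bucket --key path/to/file.txt --copy-source "my-bucket/path/to/file.txt?versionId=OLDER_VERSION_ID"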
If a folder and its contents are deleted, you can recover them using the script below, inspired by a previous answer.
The script applies to an S3 bucket where versioning was enabled beforehand. It uses the delete markers to restore files under an S3 prefix.
#!/bin/bash
# Inspired by https://www.dmuth.org/how-to-undelete-files-in-amazon-s3/
# This script can be used to undelete objects from an S3 bucket.
# When run, it will print out a list of AWS commands to undelete files, which you
# can then pipe into Bash.
#
#
# You will need the AWS CLI tool from https://aws.amazon.com/cli/ in order to run this script.
#
# Note that you must have the following permissions via IAM:
#
# Bucket permissions:
#
# s3:ListBucket
# s3:ListBucketVersions
#
# File permissions:
#
# s3:PutObject
# s3:GetObject
# s3:DeleteObject
# s3:DeleteObjectVersion
#
# If you want to do this in a "quick and dirty manner", you could just grant s3:* to
# the account, but I don't really recommend that.
#
# profile = company
# bucket = company-s3-bucket
# prefix = directory1/directory2/directory3/lastdirectory/
# pattern = (.*)
# USAGE
# bash undelete.sh | tee recover_files.txt | bash
read -p "Enter your aws profile: " PROFILE
read -p "Enter your S3 bucket name: " BUCKET
read -p "Enter your S3 directory/prefix to be recovered from, leave empty for to recover all of the S3 bucket: " PREFIX
read -p "Enter the file pattern looking to recover, leave empty for all: " PATTERN
# Make sure Profile and Bucket are entered
[[ -z "$PROFILE" ]] && { echo "Profile is empty" ; exit 1; }
[[ -z "$BUCKET" ]] && { echo "Bucket is empty" ; exit 1; }
# Fill PATTERN to match all if empty
PATTERN=${PATTERN:-(.*)}
# Errors are fatal
set -e
if [ "$PREFIX" = "" ];
# To recover all of the S3 bucket
then
aws --profile ${PROFILE} --output text s3api list-object-versions --bucket ${BUCKET} \
| grep -E "$PATTERN" \
| grep -E "^DELETEMARKERS" \
| awk -v PROFILE=$PROFILE -v BUCKET=$BUCKET -v PREFIX=$PREFIX \
-F "[\t]+" '{ print "aws --profile " PROFILE " s3api delete-object --bucket " BUCKET "--key \""$3"\" --version-id "$5";"}'
# To recover a directory
else
aws --profile ${PROFILE} --output text s3api list-object-versions --bucket ${BUCKET} --prefix ${PREFIX} \
| grep -E "$PATTERN" \
| grep -E "^DELETEMARKERS" \
| awk -v PROFILE=$PROFILE -v BUCKET=$BUCKET -v PREFIX=$PREFIX \
-F "[\t]+" '{ print "aws --profile " PROFILE " s3api delete-object --bucket " BUCKET "--key \""$3"\" --version-id "$5";"}'
fi
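After the generated commands have run, a quick check that the objects are visible again; substitute the same profile, bucket, and prefix you entered at the prompts (shown here as placeholders):
aws --profile myprofile s3 ls s3://my-bucket/my/prefix/ --recursive | head -n 20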