How can we save AWS RDS manual snapshots to an S3 bucket (in the same account)?
Will AWS charge for automated RDS snapshots?
Do you guys have a solution for this?
Thanks in advance.
How can we save AWS RDS manual snapshots to an S3 bucket (in the same account)?
You cannot. AWS does not provide access to the raw data of snapshots.
Will AWS charge for automated RDS snapshots?
Yes, AWS charges for the storage space that snapshots use.
RDS snapshots are only accessible through the RDS console / CLI.
If you want to export data to your own S3 bucket, you'll need to grab that information directly from the database instance, with something like mysqldump.
If you use automated snapshots then AWS will charge you for those.
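A minimal sketch of that approach (every value below is a placeholder, not something from the original answers):
# All values below are placeholders -- substitute your own RDS endpoint, credentials,
# database name, and bucket. The dump is streamed straight to S3 without touching disk.
mysqldump --host=mydb.abc123.us-east-1.rds.amazonaws.com --user=admin --password='secret' mydatabase \
  | gzip \
  | aws s3 cp - s3://my-backup-bucket/mydatabase.sql.gz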
This is a script I've used in the past to back up a MySQL/Aurora RDS instance to an S3 bucket:
#!/usr/bin/env bash
set -o errexit
set -o pipefail
set -o nounset

function log {
    echo "[`date '+%Y-%m-%d %H:%M:%S.%N'`] $1"
}

: ${MYSQL_USER:=root}
: ${MYSQL_PASS:=root}
: ${MYSQL_HOST:=127.0.0.1}
: ${MYSQL_PORT:=3306}

if [ -z "${AWS_S3_BUCKET-}" ]; then
    log "The AWS_S3_BUCKET variable is empty or not set"
    exit 1;
fi;

EXCLUDED_DATABASES=(Database information_schema mysql performance_schema sys tmp innodb)

YEAR=$(date '+%Y')
MONTH=$(date '+%m')
DAY=$(date '+%d')
TIME=$(date '+%H-%M-%S')

if [ -z "${MYSQL_DATABASE-}" ]; then
    DATABASES=$(/usr/bin/mysql --host="$MYSQL_HOST" --port="$MYSQL_PORT" --user="$MYSQL_USER" --password="$MYSQL_PASS" -e "SHOW DATABASES;" | cut -d ' ' -f 1)
else
    DATABASES="$MYSQL_DATABASE"
fi;

for DATABASE in $DATABASES; do
    for EXCLUDED in "${EXCLUDED_DATABASES[@]}"; do
        if [ "$DATABASE" == "$EXCLUDED" ]; then
            log "Excluded mysqlbackup of $DATABASE"
            continue 2
        fi;
    done

    log "Starting mysqlbackup of $DATABASE"
    AWS_S3_PATH="s3://$AWS_S3_BUCKET/path-to-folder/$DATABASE.sql.gz"
    mysqldump --host="$MYSQL_HOST" --port="$MYSQL_PORT" --user="$MYSQL_USER" --password="$MYSQL_PASS" "$DATABASE" | gzip | AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp - "$AWS_S3_PATH"
    log "Completed mysqlbackup of $DATABASE to $AWS_S3_PATH"
done
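The script is driven entirely by environment variables. A hedged usage sketch (every value below is a placeholder, including the script filename):
# All values below are placeholders -- point them at your own RDS endpoint, credentials, and bucket.
export MYSQL_HOST=mydb.abc123.us-east-1.rds.amazonaws.com
export MYSQL_USER=admin
export MYSQL_PASS='secret'
export AWS_S3_BUCKET=my-backup-bucket
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecretkey
./rds-mysql-backup.sh        # whatever filename you saved the script as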
I am unable to download complete logs using the AWS CLI for a Postgres RDS instance:
aws rds download-db-log-file-portion \
--db-instance-identifier $INSTANCE_ID \
--starting-token 0 --output text \
--max-items 99999999 \
--log-file-name error/postgresql.log.$CDATE-$CHOUR > DB_$INSTANCE_ID-$CDATE-$CHOUR.log
The log file I see in the console is ~10GB, but using the CLI I always get a file of only ~100MB.
Ref - https://docs.aws.amazon.com/cli/latest/reference/rds/download-db-log-file-portion.html
The AWS docs say:
In order to download the entire file, you need the --starting-token 0 parameter:
aws rds download-db-log-file-portion --db-instance-identifier test-instance \
--log-file-name log.txt --starting-token 0 --output text > full.txt
Can someone please suggest a fix?
You can download the complete file via the AWS Console.
But if you want to download it via a script, I recommend this method:
https://linuxtut.com/en/fdabc8bc82d7183a05f3/
Please update the variable values in the script:
profile = "default"
instance_id = "database-1"
region = "ap-northeast-1"
I've found it much simpler to just curl the link the console gives you. Just output it to a file and you're done. Make sure you have ample space on disk.
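For what it's worth, a minimal sketch of that approach; the URL below is a placeholder for the download link you copy from the RDS console, not a value from the original answer:
# Paste the log download link from the RDS console in place of this placeholder URL.
curl -L -o postgresql.log "https://<download-link-copied-from-the-rds-console>"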
I have a Jenkins job which uploads a pretty small bash file (less than 1 MB) to an S3 bucket. It works most of the time but fails once in a while with the following error:
upload failed: build/xxxxxxx/test.sh The read operation timed out
The above error comes directly from the AWS CLI operation. I'm thinking it could either be a network issue, or the disk read operation not being available at the time. How do I set an option to retry when this happens? Also, is there a timeout I can increase? I searched the CLI documentation, googled, and checked out 'aws s3api', but I don't see any such option.
If such an option does not exist, how do folks get around this? Wrap the command to check the error code and reattempt?
I ended up writing a wrapper around the s3 command to retry, and also to get debug output on the last attempt. It might help folks.
# Purpose: Allow retry while uploading files to s3 bucket
# Params:
#   $1 : local file to copy to s3
#   $2 : s3 bucket path
#   $3 : AWS bucket region
#
function upload_to_s3 {
    n=0
    until [ $n -gt 2 ]
    do
        if [ $n -eq 2 ]; then
            # Final attempt: run with --debug so the failure leaves a full trace.
            aws s3 cp --debug "$1" "$2" --region "$3"
            return $?
        else
            aws s3 cp "$1" "$2" --region "$3" && break
        fi
        n=$((n+1))
        sleep 30
    done
}
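A hedged usage example (the file, bucket path, and region below are placeholders):
# Placeholders only -- substitute your own file, bucket path, and region.
upload_to_s3 build/test.sh s3://my-bucket/builds/test.sh us-east-1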
I have an AWS EC2 instance that I currently back up by hand: I log in to the AWS console and make a daily image (AMI) of the machine.
How can I make a daily AMI backup of the machine and delete old versions (older than 7 days)?
Thank you!
Anything that you can do through the web console you can also do through the CLI.
In this particular case, I suspect a combination of aws ec2 create-image, aws ec2 describe-images, and aws ec2 deregister-image would let you do what you want.
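A minimal sketch of how those three commands could fit together; the instance ID, the naming scheme, and the 7-day cutoff handling are assumptions, not something from the original answer:
# All values are placeholders -- substitute your own instance ID and naming scheme.
INSTANCE_ID=i-0123456789abcdef0
TODAY=$(date +%Y-%m-%d)
CUTOFF=$(date --date="7 days ago" +%Y-%m-%d)

# Create today's AMI without rebooting the instance.
aws ec2 create-image --instance-id "$INSTANCE_ID" --name "backup_${TODAY}" --no-reboot

# Find the AMI created on the cutoff date (assumes the same naming scheme was used then).
OLD_AMI=$(aws ec2 describe-images --owners self \
  --filters "Name=name,Values=backup_${CUTOFF}" \
  --query 'Images[0].ImageId' --output text)

# Deregister it if it exists.
if [ -n "$OLD_AMI" ] && [ "$OLD_AMI" != "None" ]; then
  aws ec2 deregister-image --image-id "$OLD_AMI"
fi
Note that deregistering an AMI does not delete its underlying EBS snapshots; those need to be cleaned up separately with aws ec2 delete-snapshot.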
AWS Lambda would be a good solution to automate the AMI backup and cleanup. You can schedule the Lambda function (basically a Python script) to run periodically, so you don't need to keep an EC2 instance running all the time just for the backup job. An example is here: http://powerupcloud.azurewebsites.net/2016/10/15/serverless-automate-ami-creation-and-deletion-using-aws-lambda/
Below is a shell script I use that runs daily via cron. You can set the value of the variable prevday1 to control how long you keep your images. In your case you want 7 days, so it would be:
prevday1=$(date --date="7 days ago" +%Y-%m-%d)
Here is the full script:
#!/bin/bash
# prior to using this script you will need to install the aws cli on the local machine
# https://docs.aws.amazon.com/AmazonS3/latest/dev/setup-aws-cli.html
# AWS CLI - will need to configure this
# sudo apt-get -y install awscli
# example of current config - july 10, 2020
#aws configure
#aws configure set key ARIAW5YUMJT7PO2N7L *fake - use your own*
#aws configure secret X2If+xa/rFITQVMrgdQVpFLx1c7fwP604QkH/x *fake - use your own*
#aws configure set region us-east-2
#aws configure set format json
# backup EC2 instances nightly 4:30 am GMT
# 30 4 * * * . $HOME/.profile; /var/www/devopstools/shell-scripts/file_backup_scripts/ec2_backup.sh
script_dir="$(dirname "$0")"
# If you want live notifications about backups
#SLACK_API_URL="https://hooks.slack.com/services/T6VQ93KM/BT8REK5/hFYEDUCoO1Bw72wxxFSj7oY"
source "$script_dir/includes/helpers.sh"
prevday1=$(date --date="2 days ago" +%Y-%m-%d)
prevday2=$(date --date="3 days ago" +%Y-%m-%d)
today=$(date +"%Y-%m-%d")
instances=()
# add as many instances to backup as needed
instances+=("autobackup_impressto|i-0ed78a1f3583e1859")
for ((i = 0; i < ${#instances[@]}; i++)); do
    instance=${instances[$i]}
    instanceName="$(cut -d'|' -f1 <<<"$instance")"
    instanceId="$(cut -d'|' -f2 <<<"$instance")"

    prevImageName1="${instanceName}_${prevday1}_$instanceId"
    prevImageName2="${instanceName}_${prevday2}_$instanceId"
    newImageName="${instanceName}_${today}_$instanceId"

    consoleout --green "Begin backing $instanceName [$instanceId]"

    aws ec2 create-image \
        --instance-id $instanceId \
        --name "$newImageName" \
        --description "$instanceName" \
        --no-reboot

    if [ $? -eq 0 ]; then
        echo "$newImageName created."
        echo ""
        if [ ! -z "${SLACK_API_URL}" ]; then
            curl -X POST -H 'Content-type: application/json' --data '{"text":":rotating_light: Backing up *'$newImageName'* to AMI. :rotating_light:"}' ${SLACK_API_URL}
        fi
        echo -e "\e[92mBacking up ${newImageName} to AMI."
    else
        echo "$newImageName not created."
        echo ""
    fi

    imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName1}" --query 'Images[*].[ImageId]' --output text)
    if [ ! -z "${imageId}" ]; then
        echo "Deregistering ${prevImageName1} [${imageId}]"
        echo ""
        echo "aws ec2 deregister-image --image-id ${imageId}"
        aws ec2 deregister-image --image-id ${imageId}
    fi

    imageId=$(aws ec2 describe-images --filters "Name=name,Values=${prevImageName2}" --query 'Images[*].[ImageId]' --output text)
    if [ ! -z "${imageId}" ]; then
        echo "Deregistering ${prevImageName2} [${imageId}]"
        echo ""
        echo "aws ec2 deregister-image --image-id ${imageId}"
        aws ec2 deregister-image --image-id ${imageId}
    fi

    consoleout --green "Completed backing $instanceName"
done
Also available here - https://impressto.net/automatic-aws-ec2-backups/
You can use https://github.com/alestic/ec2-consistent-snapshot and run it in a cron job. It supports various filesystems and has support for ensuring database snapshots are consistent. If you don't have a database in your instance, it will still ensure consistent snapshots by freezing the filesystem.
I need to upload updated files to multiple EC2 instances that sit behind a single load balancer. My problem is that I missed some EC2 instances, and that broke my webpage.
Is there any tool available to upload multiple files to multiple EC2 Windows servers in a single click?
I will update my files weekly, or sometimes daily. I checked Elastic Beanstalk, AWS CodeDeploy, and Amazon EFS, but they are hard to use. Can anyone please help?
I suggest using AWS S3 and the AWS CLI. Install the AWS CLI on each EC2 instance and create a bucket in S3.
Then start a cron job on each EC2 instance with the following syntax:
aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder
So when you upload new images to the S3 bucket, they will automatically sync to all the EC2 instances behind your load balancer. S3 also becomes the central directory where you upload and delete images.
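For example, a crontab entry that runs the sync every five minutes (the schedule is an assumption; the bucket and path are the placeholders from the command above):
# Sync from S3 every 5 minutes; adjust the schedule, bucket, and local path to your setup.
*/5 * * * * aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder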
You could leverage the AWS CLI, you could run something like
aws elb describe-load-balancers --load-balancer-name <name_of_your_lb> --query LoadBalancerDescriptions[].Instances --output text |\
xargs -I {} aws ec2 describe-instances --instance-id {} --query Reservations[].Instances[].PublicIpAddress |\
xargs -I {} scp <name_of_your_file> <your_username>@{}:/some/remote/directory
Basically it goes like this:
find out all the EC2 instances connected to your load balancer
for each of those EC2 instances, find out its PublicIpAddress (assuming they have one, since you can connect to them through scp)
run the scp command to copy one file somewhere on the EC2 server
you can also copy a folder if you need to push many files; it might be easier
Amazon Elastic File System (EFS) would probably now be the easiest option: you create your file system, attach it to all the EC2 instances behind the load balancer, and when you transfer files to the EFS they become available to all the EC2 instances where the EFS is mounted
(the setup to create the EFS and mount it on your EC2 instances has to be done only once)
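As a rough sketch of that one-time mount step on a Linux instance (the file system ID is a placeholder; the original question mentions Windows servers, where the setup would differ):
# Placeholder file system ID -- use your own. Assumes Amazon Linux with the EFS mount helper.
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs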
Create a script containing some robocopy commands and run it when you want to update the files on your servers. Something like this:
robocopy Source Destination1 files
robocopy Source Destination2 files
You will also need to share the folder you want to copy to with the user on your machine.
I had an application load balancer (ALB), so I had to build on @FredricHenri's answer.
EC2_PUBLIC_IPS=$(aws elbv2 --profile mfa describe-load-balancers --names c360-infra-everest-dev-lb --query 'LoadBalancers[].LoadBalancerArn' --output text \
  | xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-groups --load-balancer-arn {} --query 'TargetGroups[].TargetGroupArn' --output text \
  | xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-health --target-group-arn {} --query 'TargetHealthDescriptions[*].Target.Id' --output text \
  | xargs -n 1 -I {} aws ec2 --profile mfa describe-instances --instance-id {} --query 'Reservations[].Instances[].PublicIpAddress' --output text)
echo $EC2_PUBLIC_IPS
echo ${EC2_PUBLIC_IPS} | xargs -n 1 -I {} scp -i ${EC2_SSH_KEY_FILE} ../swateek.txt ubuntu@{}:/home/ubuntu/
Points to note:
I have used an AWS profile called "mfa"; this is optional.
The other environment variable, EC2_SSH_KEY_FILE, is the name of the .pem file used to access the EC2 instance.
I've set up a small EMR cluster with Hive/Presto installed. I want to query files on S3 and import them into Postgres on RDS.
To run queries on S3 and save the results in a table in Postgres, I've done the following:
Started a 3 node EMR cluster from the AWS console.
Manually SSHed into the master node and created an EXTERNAL table in Hive, pointing at an S3 bucket.
Manually SSHed into each of the 3 nodes and added a new catalog file:
/etc/presto/conf.dist/catalog/postgres.properties
with the following contents
connector.name=postgresql
connection-url=jdbc:postgresql://ip-to-postgres:5432/database
connection-user=<user>
connection-password=<pass>
and edited this file
/etc/presto/conf.dist/config.properties
adding
datasources=postgresql,hive
Restarted Presto by running the following manually on all 3 nodes:
sudo restart presto-server
This setup seems to work well.
In my application, there are multiple databases created dynamically. It seems that those configuration/catalog changes need to be made for each database and the server needs to be restarted to see the new config changes.
Is there a proper way for my application (using boto or other methods) to update the configuration by:
Adding a new catalog file on all nodes under /etc/presto/conf.dist/catalog/ for each new database
Adding a new entry on all nodes in /etc/presto/conf.dist/config.properties
Gracefully restarting Presto across the whole cluster (ideally when it becomes idle, but that's not a major concern)?
I believe you can run a simple bash script to achieve what you want. There is no other way, short of creating a new cluster with the --configurations parameter where you provide the desired configurations. You can run the script below from the master node.
#!/bin/sh
# "cluster_nodes.txt" with private IP address of each node.
aws emr list-instances --cluster-id <cluster-id> --instance-states RUNNING | grep PrivateIpAddress | sed 's/"PrivateIpAddress"://' | sed 's/\"//g' | awk '{gsub(/^[ \t]+|[ \t]+$/,""); print;}' > cluster_nodes.txt
# For each IP connect with ssh and configure.
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "Connecting $line"
    scp -i <PEM file> postgres.properties hadoop@$line:/tmp;
    ssh -i <PEM file> hadoop@$line "sudo mv /tmp/postgres.properties /etc/presto/conf/catalog; sudo chown presto:presto /etc/presto/conf/catalog/postgres.properties; sudo chmod 644 /etc/presto/conf/catalog/postgres.properties; sudo restart presto-server";
done < cluster_nodes.txt
During provisioning of your cluster:
You can provide the configuration details at the time you provision the cluster.
Refer to Presto Connector Configuration on how to add this automatically during the provision of your cluster.
You can provide the configuration via the management console, or you can use the AWS CLI to pass those configurations, as follows:
#!/bin/bash
JSON=`cat <<JSON
[
  {
    "Classification": "presto-connector-postgresql",
    "Properties": {
      "connection-url": "jdbc:postgresql://ip-to-postgres:5432/database",
      "connection-user": "<user>",
      "connection-password": "<password>"
    },
    "Configurations": []
  }
]
JSON`

aws emr create-cluster --configurations "$JSON" # ... rest of params