I am trying to assume an AWS role within a CI/CD pipeline, so I need to switch roles via a script. Below is the script; I ran it with source <script>.sh so it replaces the existing AWS access and secret keys and adds the session token.
I checked that the three environment variables are set by echoing them in the terminal.
#!/bin/bash
output="/tmp/assume-role-output.json"
aws sts assume-role --role-arn "arn:aws:iam::<account-id>:role/<rolename>" --role-session-name AWSCLI-Session > $output
AccessKeyId=$(cat $output | jq '.Credentials''.AccessKeyId')
SecretAccessKey=$(cat $output | jq '.Credentials''.SecretAccessKey')
SessionToken=$(cat $output | jq '.Credentials''.SessionToken')
export AWS_ACCESS_KEY_ID=$AccessKeyId
export AWS_SECRET_ACCESS_KEY=$SecretAccessKey
export AWS_SESSION_TOKEN=$SessionToken
However, when I tried running a simple AWS command to list ECR images, aws ecr list-images --registry-id <id> --repository-name <name>, it gave the following error message.
An error occurred (UnrecognizedClientException) when calling the ListImages operation:
The security token included in the request is invalid.
I tried manually setting the AWS keys and token in the terminal, and surprisingly the ecr list command worked.
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="XXX"
export AWS_SESSION_TOKEN="XXX"
Does anyone know what is wrong with my script?
Here is a one-liner that doesn't use a temporary file:
OUT=$(aws sts assume-role --role-arn arn:aws:iam::<YOUR_ACCOUNT>:role/<YOUR_ROLENAME> --role-session-name aaa);\
export AWS_ACCESS_KEY_ID=$(echo $OUT | jq -r '.Credentials''.AccessKeyId');\
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | jq -r '.Credentials''.SecretAccessKey');\
export AWS_SESSION_TOKEN=$(echo $OUT | jq -r '.Credentials''.SessionToken');
This might also be useful.
Print the values as bash export statements to paste into another terminal:
printf "export AWS_ACCESS_KEY_ID=\"%s\"\\n" $AWS_ACCESS_KEY_ID;\
printf "export AWS_SECRET_ACCESS_KEY=\"%s\"\\n" $AWS_SECRET_ACCESS_KEY;\
printf "export AWS_SESSION_TOKEN=\"%s\"\\n\\n\\n" $AWS_SESSION_TOKEN;
Print the values in a JSON context (useful for launch.json in VS Code):
printf "\"AWS_ACCESS_KEY_ID\":\"$AWS_ACCESS_KEY_ID\",\\n";\
printf "\"AWS_SECRET_ACCESS_KEY\":\"$AWS_SECRET_ACCESS_KEY\",\\n";\
printf "\"AWS_SESSION_TOKEN\":\"$AWS_SESSION_TOKEN\"\\n";
Update
Here is the PowerShell version:
$OUT = aws sts assume-role --role-arn arn:aws:iam::<YOUR_ACCOUNT>:role/<YOUR_ROLENAME> --role-session-name aaa
$JSON_OUT = ConvertFrom-Json "$OUT"
$ACCESS_KEY=$JSON_OUT.Credentials.AccessKeyId
$SECRET_KEY=$JSON_OUT.Credentials.SecretAccessKey
$SESSION_TOKEN=$JSON_OUT.Credentials.SessionToken
"Paste these env variables to your terminal to assume the role"
-join ("`n", '$Env:AWS_ACCESS_KEY_ID="', "$ACCESS_KEY", '"')
-join ('$Env:AWS_SECRET_ACCESS_KEY="', "$SECRET_KEY", '"')
-join ('$Env:AWS_SESSION_TOKEN="', "$SESSION_TOKEN", '"')
If you use jq the way you do, your export values will contain quotation marks, e.g.
"ASIASZHPM3IXQXXOXFOY"
rather than:
ASIASZHPM3IXQXXOXFOY
To avoid this, you have to add the -r flag to jq:
AccessKeyId=$(cat $output | jq -r '.Credentials''.AccessKeyId')
SecretAccessKey=$(cat $output | jq -r '.Credentials''.SecretAccessKey')
SessionToken=$(cat $output | jq -r '.Credentials''.SessionToken')
Adding to #carmel's answer
Here's a function that you can add to your .bashrc/.zshrc to set the environment variables automatically.
function assume-role() {
OUT=$(aws sts assume-role --role-arn $1 --role-session-name $2);\
export AWS_ACCESS_KEY_ID=$(echo $OUT | jq -r '.Credentials''.AccessKeyId');\
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | jq -r '.Credentials''.SecretAccessKey');\
export AWS_SESSION_TOKEN=$(echo $OUT | jq -r '.Credentials''.SessionToken');
}
This can then be used as:
$ assume-role <role-arn> <role-session-name>
Adding the answer I needed, in case it helps someone: I did not have jq available in the container I was using.
You can also use cut to parse the output:
OUT=$(aws sts assume-role --role-arn "arn:aws:iam::<account-id>:role/<role-name>" --role-session-name <session-name>)
export AWS_ACCESS_KEY_ID=$(echo $OUT | cut -d '"' -f 6 )
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | cut -d '"' -f 10 )
export AWS_SESSION_TOKEN=$(echo $OUT | cut -d '"' -f 14 )
Single command to get the result
eval $(aws sts assume-role --role-arn arn:aws:iam::${AWSAccountId}:role/role-name --role-session-name awscli-session | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
Related
I am trying to read credentials (such as AccessKeyId) from the assume-role output and store them in a variable, but I am getting an error.
My code and the error are:
jq -r '".Credentials.AccessKeyId"' mysession.json | awk '"{print "set","AWS_ACCESS_KEY_ID="$0}"' > variables
jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end (Windows cmd shell quoting issues?) at , line 1:
'".Credentials.AccessKeyId"'
jq: 1 compile error
awk: '"{print
awk: ^ invalid char ''' in expression
Please suggest how to achieve this in Windows CMD. I have installed jq and awk on Windows.
aws sts assume-role --role-arn role_arn --role-session-name session_name > mysession.json
$ak = jq -r ".Credentials.AccessKeyId" mysession.json
$sk = jq -r ".Credentials.SecretAccessKey" mysession.json
$tk = jq -r ".Credentials.SessionToken" mysession.json
Write-Host "Acccess Key ID:" $ak
Write-Host "Secret Acccess Key:" $sk
Write-Host "Session Token:" $tk
PowerShell
$source_profile = "default"
$region = "ap-southeast-2"
$role_arn = "arn:aws:iam::account_id:role/role-test"
$target_profile = "test"
$target_profile_path = "$HOME\.aws\credentials"
$session_name = "test"
# Assume Role
$Response = (Use-STSRole -Region $region -RoleArn $role_arn -RoleSessionName $session_name -ProfileName $source_profile).Credentials
# Export credentials as environment variables
$env:AWS_ACCESS_KEY_ID=$Response.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY=$Response.SecretAccessKey
$env:AWS_SESSION_TOKEN=$Response.SessionToken
# Create Profile with Credentials
Set-AWSCredential -StoreAs $target_profile -ProfileLocation $target_profile_path -AccessKey $Response.AccessKeyId -SecretKey $Response.SecretAccessKey -SessionToken $Response.SessionToken
# Print expiration time
Write-Host("Credentials will expire at: " + $Response.Expiration)
AWS Assume Role Script
How can I parse an assumed role's credentials in PowerShell and set them as variables in a script?
On the jq site it mentions syntax adjustments for Windows:
"when using the Windows command shell (cmd.exe) it's best to use
double quotes around your jq program when given on the command-line
(instead of the -f program-file option), but then double-quotes in the
jq program need backslash escaping."
So, instead of
jq -r '".Credentials.AccessKeyId"' mysession.json
You'll need to escape double quotes, then change single quotes to double.
jq -r "\".Credentials.AccessKeyId\"" mysession.json
In order to delete log streams from a log group using the CLI, individual log stream names are required.
Is there a way to delete all log streams belonging to a log group using a single command?
You can achieve this by using --query to target the results of describe-log-streams. This allows you to loop through and delete the results.
aws logs describe-log-streams --log-group-name $LOG_GROUP_NAME --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP_NAME --log-stream-name $x; done
You can use --query to target all or specific groups or streams.
Delete streams from a specific month
aws logs describe-log-streams --log-group-name $LOG_GROUP --query 'logStreams[?starts_with(logStreamName,`2017/07`)].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP --log-stream-name $x; done
Delete All log groups - Warning, it deletes EVERYTHING!
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
Clearing specific log groups
aws logs describe-log-groups --query "logGroups[?starts_with(logGroupName,\`$LOG_GROUP_NAME\`)].logGroupName" --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
Credit
Implemented script with command from #Stephen's answer. The script shows summary before deletion and tracks progress of deletion.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
for name in ${LOG_STREAMS}; do
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
done
Github link
To delete all log streams associated with a specific log group, run the following command, replacing NAME_OF_LOG_GROUP with your group:
aws logs describe-log-streams --log-group-name NAME_OF_LOG_GROUP --output text | awk '{print $7}' | while read x;
do aws logs delete-log-stream --log-group-name NAME_OF_LOG_GROUP --log-stream-name $x
done
Here is a script to delete all log streams in a log group using Python. Just change logGroupName to match your log group.
import boto3
client = boto3.client('logs')
# Note: describe_log_streams returns at most 50 streams per call;
# for larger log groups you would need to paginate with nextToken
response = client.describe_log_streams(
    logGroupName='/aws/batch/job'
)
def delete_stream(stream):
    delete_response = client.delete_log_stream(
        logGroupName='/aws/batch/job',
        logStreamName=stream['logStreamName']
    )
    print(delete_response)
# map() is lazy in Python 3, so loop explicitly to actually delete the streams
for stream in response['logStreams']:
    delete_stream(stream)
Based on #german-lashevich's answer
If you have thousands of log streams, you will need to parallelize.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
step() {
local name=$1
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
}
N=20
for name in ${LOG_STREAMS}; do ((i=i%N)); ((i++==0)) && wait ; step "$name" & done
This cannot be done with a single AWS CLI command. We achieved it with a script that first retrieves all the log streams of a log group and then deletes them in a loop.
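A minimal sketch of that approach in bash (the log group name is a placeholder):
LOG_GROUP_NAME="my-log-group"   # placeholder, replace with your log group
# list every stream name, then delete the streams one by one
for stream in $(aws logs describe-log-streams --log-group-name "$LOG_GROUP_NAME" --query 'logStreams[*].logStreamName' --output text); do
  aws logs delete-log-stream --log-group-name "$LOG_GROUP_NAME" --log-stream-name "$stream"
done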
For Windows users, this PowerShell script could be useful to remove all the log streams in a log group:
#Set your log group name
$log_group_name = "/production/log-group-name"
aws logs describe-log-streams --log-group-name $log_group_name --query logStreams --output json | ConvertFrom-json | ForEach-Object {$_.logStreamName} | ForEach-Object {
aws logs delete-log-stream --log-group-name $log_group_name --log-stream-name $_
Write-Host ($_ + " -> deleted") -ForegroundColor Green
}
Just save it as your_script_name.ps1 and execute it in PowerShell.
An alternative version using the AWS PowerShell cmdlets on Windows: launch a PowerShell command line and run:
$LOG_GROUP_NAME="cloud-watch-group-name";
$LOG_STREAM_NAME_PREFIX="cloud-watch-log-stream-name";
Set-DefaultAWSRegion -Region us-your-regions;
Set-AWSCredential -AccessKey ACCESSKEYEXAMPLE -SecretKey sEcReTKey/EXamPLE/xxxddddEXAMPLEKEY -StoreAs MyProfileName
Get-CWLLogStream -LogGroupName $LOG_GROUP_NAME -LogStreamNamePrefix $LOG_STREAM_NAME_PREFIX | Remove-CWLLogStream -LogGroupName $LOG_GROUP_NAME;
You may use the -Force parameter on the Remove-CWLLogStream cmdlet if you don't want to confirm each deletion one by one.
References
https://docs.aws.amazon.com/powershell/latest/reference/Index.html
The others have already described how you can paginate through all the log streams and delete them one by one.
I would like to offer two alternative ways that have (more or less) the same effect, but don't require you to loop through all the log streams.
Deleting the log group and then re-creating it has the desired effect: all the log streams of the log group will be deleted.
delete-log-group
followed by:
create-log-group
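A minimal sketch of that, with a placeholder log group name:
aws logs delete-log-group --log-group-name my-log-group    # removes the group and all of its streams
aws logs create-log-group --log-group-name my-log-group    # re-create it (subscriptions, retention, etc. must be re-applied, see the caveat below)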
CAVEAT: Deleting a log group can have unintended consequences. For example, subscriptions and the retention policy will be deleted as well, and those have to be restored too when the log group is re-created.
Another workaround is to set a 1 day retention period.
put-retention-policy
It won't have an immediate effect; you will have to wait about a day, but after that all the old data will be deleted. The names of the old streams and their metadata (last event time, creation time, etc.) will remain, but you won't be charged for that (as far as I can tell from my own bill).
So it is not exactly what you asked for. However, probably the most important reason why one would want to delete all the log streams is to delete the logged data (to reduce costs, or for compliance reasons), and this approach achieves that.
WARNING: Don't forget to change the retention policy after the old data is gone, or you will continually delete data after 1 day, and chances are, it is not what you want in the long run.
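A hedged sketch of that workflow with a placeholder group name (adjust the final retention to whatever you actually want to keep):
aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 1
# once the old data has aged out, restore a longer retention or drop the policy entirely:
aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 30
aws logs delete-retention-policy --log-group-name my-log-group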
If you are doing this in zsh and you only need a simple one-liner, just update these values:
* Pattern
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
Pattern can be any text; you can also add ^ for the beginning of the line or $ for the end of the line.
Then run the command below:
Pattern="YOUR_PATTERN" && setupKeys="AWS_ACCESS_KEY_ID=YOUR_KEY AWS_SECRET_ACCESS_KEY=YOUR_KEY AWS_DEFAULT_REGION=YOUR_REGION" &&
eval "${setupKeys} aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | sed 's/|//g'| sed 's/\s//g'| grep -i ${Pattern} "| while read x; do echo "deleting $x" && $setupKeys aws logs delete-log-group --log-group-name $x; done
--log-group-name is not optional in the AWS CLI; you can try using * as the --log-group-name value (in a test environment).
aws logs delete-log-group --log-group-name my-logs
Reference URL:
http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html
If you are using a prefix, you could use the following command.
aws logs describe-log-streams --log-group-name <log_group_name> --log-stream-name-prefix "<log_stream_prefix>" --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name <log_group_name> --log-stream-name $x; done;
I am trying to figure out what the s3cmd command would be to download files from a bucket by date. For example, I have a bucket named "test" containing files from different dates, and I want to get the files that were uploaded yesterday. What would the command be?
There is no single command that will allow you to do that. You have to write a script something like this, or use an SDK. The sample script below downloads S3 files older than the given age (e.g. 30 days).
#!/bin/bash
# Usage: ./getOld "bucketname" "30 days"
s3cmd ls s3://$1 | while read -r line; do
createDate=`echo $line|awk {'print $1" "$2'}`
createDate=`date -d"$createDate" +%s`
olderThan=`date -d"-$2" +%s`
if [[ $createDate -lt $olderThan ]]
then
fileName=`echo $line|awk {'print $4'}`
echo $fileName
if [[ $fileName != "" ]]
then
s3cmd get "$fileName"
fi
fi
done;
I like s3cmd, but for a single-line command I prefer the JSON output of the AWS CLI and the jq JSON processor.
The command will look like:
aws s3api list-objects --bucket "yourbucket" |\
jq '.Contents[] | select(.LastModified | startswith("yourdate")).Key' --raw-output |\
xargs -I {} aws s3 cp s3://yourbucket/{} .
Basically, what the command does:
lists all objects from the given bucket
(the interesting part) jq parses the Contents array and selects the elements whose LastModified value starts with your pattern (which you will need to change), gets the Key of the S3 object, and --raw-output strips the quotes from the value
passes the result to an aws s3 cp command to download the file from S3
If you want to automate a bit further, you can get yesterday's date from the command line.
For macOS:
$ export YESTERDAY=`date -v-1d +%F`
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
For Linux (or other flavors of bash that I am less familiar with):
$ export YESTERDAY=`date -d "1 day ago" '+%Y-%m-%d' `
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
Now you get the idea if you want to change the YESTERDAY variable to a different kind of date.
I have a bucket with versioning enabled. How can I get back objects that were accidentally permanently deleted from it?
I have created a script to restore objects that have a delete marker. You'll have to invoke it like below:
sh Undelete_deletemarker.sh bucketname path/to/certain/folder
**Script:**
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove all versions and delete markers for each object
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{print $3}')
VERSION_ID=$( echo $obj | awk '{print $5}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key $KEY --version-id
$VERSION_ID
done
Happy Coding! ;)
Thank you, Kc Bickey, this script works wonderfully! The only thing I might add for others is to make sure " $VERSION_ID" immediately follows "--version-id" on line 12. The forum seems to have wrapped " $VERSION_ID" to the next line, and it causes the script to error until that's corrected.
**Script:**
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove all versions and delete markers for each object
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{print $3}')
VERSION_ID=$( echo $obj | awk '{print $5}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key $KEY --version-id $VERSION_ID
done
With bucket versioning enabled, to permanently delete an object you need to specifically reference the version of the object (DELETE Object versionId).
If you've done so, you cannot recover that specific version; you only get access to the previous versions.
When versioning is enabled, a simple DELETE cannot permanently delete an object. Instead, Amazon S3 inserts a delete marker in the bucket, and you can recover the object from that specific marker; but if the version itself was permanently deleted (as you mention), it cannot be recovered.
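For the common case where the object only has a delete marker, a hedged sketch of restoring a single object (hypothetical bucket and key names):
# find the delete marker's VersionId for the key
aws s3api list-object-versions --bucket my-bucket --prefix path/to/file.txt
# removing the delete marker makes the previous version current again
aws s3api delete-object --bucket my-bucket --key path/to/file.txt --version-id <delete-marker-version-id>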
Did you enable Cross-Region Replication? If so, you can retrieve the object in the other region:
If a DELETE request specifies a particular object version ID to delete, Amazon S3 will delete that object version in the source bucket, but it will not replicate the deletion in the destination bucket (in other words, it will not delete the same object version from the destination bucket). This behavior protects data from malicious deletions.
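If replication was in place, a minimal sketch of copying the surviving replica back (hypothetical bucket names):
aws s3 cp s3://my-replica-bucket/path/to/object s3://my-source-bucket/path/to/object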
Edit: If you have versioning enabled on your bucket, you should see the Versions Hide/Show toggle button, and when Show is selected you should have an additional Version ID column, as in the screenshot from my bucket.
If your bucket objects have white space in their file names, the previous scripts may not work properly. This script takes the key including the white space.
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove all versions and delete markers for each object
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{indice=index($0,$(NF-1))-index($0,$3);print substr($0, index($0,$3), indice-1)}')
VERSION_ID=$( echo $obj | awk '{print $NF}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key "$KEY" --version-id $VERSION_ID
done
This version of the script worked really well for me. I have a bucket that has a directory with 180,000 items in it, and this one chews through them and restores all the files that are in a directory/folder that is within the bucket.
If you just need to restore all the items in a bucket that don't have a directory, then you can just drop the prefix parameter.
#!/bin/bash
BUCKET_NAME=mybucketname
DIRECTORY=myfoldername
function run() {
  aws s3api list-object-versions --bucket ${BUCKET_NAME} --prefix="${DIRECTORY}" --query='{Objects: DeleteMarkers[].{Key:Key}}' --output text |
  while read KEY
  do
    if [[ "$KEY" == "None" ]]; then
      continue
    else
      KEY=$(echo ${KEY} | awk '{$1=""; print $0}' | sed "s/^ *//g")
      VERSION=$(aws s3api list-object-versions --bucket ${BUCKET_NAME} --prefix="$KEY" --query='{Objects: DeleteMarkers[].{VersionId:VersionId}}' --output text | awk '{$1=""; print $0}' | sed "s/^ *//g")
      echo ${KEY}
      echo ${VERSION}
    fi
    aws s3api delete-object --bucket ${BUCKET_NAME} --key="${KEY}" --version-id ${VERSION}
  done
}
# kick off the restore
run
Note: running this script a second time will execute, but it won't do anything useful; it just returns the same records. If you have a massive bucket, you might set up 3-4 copies of the script that filter by keys starting with a certain letter/number, as sketched below. At least this way you can start working on files deeper down in the bucket.
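A hedged one-liner illustrating that filtering idea (hypothetical bucket and folder names):
# list delete markers only for keys under the folder that start with "a"
aws s3api list-object-versions --bucket mybucketname --prefix "myfoldername/a" --query '{Objects: DeleteMarkers[].{Key:Key}}' --output text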
I would like to export a DNS zonefile from my Amazon Route 53 setup. Is this possible, or can zonefiles only be created manually? (e.g. through http://www.zonefile.org/?lang=en)
The following script exports zone details in BIND format from Route 53. Pass the domain name as a parameter to the script. (This requires awscli and jq to be installed and configured.)
#!/bin/bash
zonename=$1
hostedzoneid=$(aws route53 list-hosted-zones --output json | jq -r ".HostedZones[] | select(.Name == \"$zonename.\") | .Id" | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id $hostedzoneid --output json | jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'
It's not possible yet. You'll have to use the ListResourceRecordSets API and build the zone file yourself.
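A minimal sketch of that with the CLI and jq (placeholder hosted zone ID; alias records carry no TTL or ResourceRecords and would need extra handling):
ZONE_ID=Z1234567890ABC   # placeholder
aws route53 list-resource-record-sets --hosted-zone-id "$ZONE_ID" --output json | jq -r '.ResourceRecordSets[] | .Name + "\t" + ((.TTL // 300) | tostring) + "\tIN\t" + .Type + "\t" + .ResourceRecords[]?.Value'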
As stated in the comment, cli53 is a great tool for interacting with Route 53 from the command line.
First, configure your account keys in the ~/.aws/config file:
[default]
aws_access_key_id = AK.....ZP
aws_secret_access_key = 8j.....M0
Then, use the export command:
$ cli53 export --full --debug example.com > example.com.zone 2> example.com.zone.log
Verify the example.com.zone file after export to make sure that everything is exported correctly.
You can import the zone later:
$ cli53 import --file ./example.com.zone example.com
And if you want to transfer the Route53 zone from one AWS account to another, you can use the profile option. Just add two named accounts to the ~/.aws/config file and reference them with the profile property during export and import. You can even pipe these two commands.
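A hedged sketch of that transfer, assuming two named profiles (src and dst are placeholder names) in the ~/.aws/config file:
$ cli53 export --full --profile src example.com > example.com.zone
$ cli53 import --file ./example.com.zone --profile dst example.com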
You can export a JSON file:
aws route53 list-resource-record-sets --hosted-zone-id <zone-id-here> --output json > route53-records.json
You can export with the AWS API:
aws route53 list-resource-record-sets --hosted-zone-id YOUR_ZONE_ID
Exporting and importing is possible with https://github.com/RisingOak/route53-transfer
Based on #szentmarjay's answer above, except this one shows usage and supports either zone_id or zone_name. This is my favorite because it's standard old-school BIND format, so other tools can work with it.
#!/bin/bash
# r53_export
usage() {
local cmd=$(basename "$0")
echo -e >&2 "\nUsage: $cmd {--id ZONE_ID|--domain ZONE_NAME}\n"
exit 1
}
while [[ $1 ]]; do
if [[ $1 == --id ]]; then shift; zone_id="$1"
elif [[ $1 == --domain ]]; then shift; zone_name="$1"
else usage
fi
shift
done
if [[ $zone_name ]]; then
zone_id=$(
aws route53 list-hosted-zones --output json \
| jq -r ".HostedZones[] | select(.Name == \"$zone_name.\") | .Id" \
| head -n1 \
| cut -d/ -f3
)
echo >&2 "+ Found zone id: '$zone_id'"
fi
[[ $zone_id ]] || usage
aws route53 list-resource-record-sets --hosted-zone-id $zone_id --output json \
| jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'