I would like to export a DNS zonefile from my Amazon Route 53 setup. Is this possible, or can zonefiles only be created manually? (e.g. through http://www.zonefile.org/?lang=en)
The following script exports zone details in BIND format from Route 53. Pass the domain name as a parameter to the script. (This requires awscli and jq to be installed and configured.)
#!/bin/bash
zonename=$1
hostedzoneid=$(aws route53 list-hosted-zones --output json | jq -r ".HostedZones[] | select(.Name == \"$zonename.\") | .Id" | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id $hostedzoneid --output json | jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'
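Usage, assuming you save the script as route53-export.sh and make it executable:
$ ./route53-export.sh example.com > example.com.zone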
It's not possible yet. You'll have to use the API's ListResourceRecordSets and build the zonefile yourself.
As stated in the comment, cli53 is a great tool for interacting with Route 53 from the command line.
First, configure your account keys in ~/.aws/config file:
[default]
aws_access_key_id = AK.....ZP
aws_secret_access_key = 8j.....M0
Then, use the export command:
$ cli53 export --full --debug example.com > example.com.zone 2> example.com.zone.log
Verify the example.com.zone file after export to make sure that everything is exported correctly.
You can import the zone later:
$ cli53 import --file ./example.com.zone example.com
And if you want to transfer a Route 53 zone from one AWS account to another, you can use the profile option. Just add two named profiles to the ~/.aws/config file and reference them with the --profile option during export and import. You can even pipe these two commands.
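For example, a sketch assuming two profiles named source and destination in ~/.aws/config (cli53's --profile option selects the account):
$ cli53 export --full --profile source example.com > example.com.zone
$ cli53 import --file ./example.com.zone --profile destination example.com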
You can export a JSON file:
aws route53 list-resource-record-sets --hosted-zone-id <zone-id-here> --output json > route53-records.json
You can export with the AWS CLI:
aws route53 list-resource-record-sets --hosted-zone-id YOUR_ZONE_ID
Exporting and importing is possible with https://github.com/RisingOak/route53-transfer
Based on #szentmarjay's answer above, except it shows usage and supports either a zone ID or a zone name. This is my favorite because it outputs standard old-school BIND format, so other tools can work with it.
#!/bin/bash
# r53_export
usage() {
local cmd=$(basename "$0")
echo -e >&2 "\nUsage: $cmd {--id ZONE_ID|--domain ZONE_NAME}\n"
exit 1
}
while [[ $1 ]]; do
if [[ $1 == --id ]]; then shift; zone_id="$1"
elif [[ $1 == --domain ]]; then shift; zone_name="$1"
else usage
fi
shift
done
if [[ $zone_name ]]; then
zone_id=$(
aws route53 list-hosted-zones --output json \
| jq -r ".HostedZones[] | select(.Name == \"$zone_name.\") | .Id" \
| head -n1 \
| cut -d/ -f3
)
echo >&2 "+ Found zone id: '$zone_id'"
fi
[[ $zone_id ]] || usage
aws route53 list-resource-record-sets --hosted-zone-id $zone_id --output json \
| jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'
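For example (assuming the script is saved as r53_export and made executable):
$ ./r53_export --domain example.com > example.com.zone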
I'd like to list all objects in WAF that do not have resources connected to them using the aws cli in my terminal.
Is there any way I can do this using the aws wafv2 list-web-acls --scope <value> AWS CLI command with other parameters?
Thanks
Looks like there's no command for that, so I created a script that places the results in a file. Might come in handy for anyone on here.
#!/bin/bash
#list the web acl objects with their corresponding arn and save it in a file
aws wafv2 list-web-acls --scope REGIONAL | grep "ARN" > output.txt
# Next, extract only the ARNs and save the output in a separate file
awk -F\" '{print $4}' output.txt > input.txt
#Create a file to store ARN numbers together with their resources attached
touch resources.txt
#loop through each line and generate the resource attached to an ARN object based on its ARN no
while read p; do
echo $p >> resources.txt && \
aws wafv2 list-resources-for-web-acl --web-acl-arn $p >> resources.txt && \
echo ------------------------ >> resources.txt
#echo -e ' \t ' >> resources.txt
done < input.txt
#remove unwanted files
rm input.txt output.txt
#list webacl objects that do not have resources attached to them
grep -B 3 "\[\]" resources.txt | grep "webacl"
#remove any files left
rm resources.txt
I am trying to assume an AWS role within a CI/CD pipeline, so I have to switch roles via a script. Below is the script to do that; I used source <script>.sh to replace the existing AWS access and secret keys and add the session token.
I checked that the 3 env variables are there by echoing them in the terminal.
#!/bin/bash
output="/tmp/assume-role-output.json"
aws sts assume-role --role-arn "arn:aws:iam::<account-id>:role/<rolename>" --role-session-name AWSCLI-Session > $output
AccessKeyId=$(cat $output | jq '.Credentials''.AccessKeyId')
SecretAccessKey=$(cat $output | jq '.Credentials''.SecretAccessKey')
SessionToken=$(cat $output | jq '.Credentials''.SessionToken')
export AWS_ACCESS_KEY_ID=$AccessKeyId
export AWS_SECRET_ACCESS_KEY=$SecretAccessKey
export AWS_SESSION_TOKEN=$SessionToken
However, when I tried running a simple aws command to list ECR images aws ecr list-images --registry-id <id> --repository-name <name>, it gave the following error message.
An error occurred (UnrecognizedClientException) when calling the ListImages operation:
The security token included in the request is invalid.
I tried manually setting the AWS keys and token in the terminal, and surprisingly the ecr list command works.
export AWS_ACCESS_KEY_ID="XXX"
export AWS_SECRET_ACCESS_KEY="XXX"
export AWS_SESSION_TOKEN="XXX"
Does anyone know what is wrong with my script?
This is a one-liner that doesn't use a file:
OUT=$(aws sts assume-role --role-arn arn:aws:iam::<YOUR_ACCOUNT>:role/<YOUR_ROLENAME> --role-session-name aaa);\
export AWS_ACCESS_KEY_ID=$(echo $OUT | jq -r '.Credentials''.AccessKeyId');\
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | jq -r '.Credentials''.SecretAccessKey');\
export AWS_SESSION_TOKEN=$(echo $OUT | jq -r '.Credentials''.SessionToken');
Might be useful..
Print it to use as bash export on another terminal
printf "export AWS_ACCESS_KEY_ID=\"%s\"\\n" $AWS_ACCESS_KEY_ID;\
printf "export AWS_SECRET_ACCESS_KEY=\"%s\"\\n" $AWS_SECRET_ACCESS_KEY;\
printf "export AWS_SESSION_TOKEN=\"%s\"\\n\\n\\n" $AWS_SESSION_TOKEN;
Print it to use in JSON context
Useful for launch.json on vs code
printf "\"AWS_ACCESS_KEY_ID\":\"$AWS_ACCESS_KEY_ID\",\\n";\
printf "\"AWS_SECRET_ACCESS_KEY\":\"$AWS_SECRET_ACCESS_KEY\",\\n";\
printf "\"AWS_SESSION_TOKEN\":\"$AWS_SESSION_TOKEN\"\\n";
Update
Here is the PowerShell version:
$OUT = aws sts assume-role --role-arn arn:aws:iam::<YOUR_ACCOUNT>:role/<YOUR_ROLENAME> --role-session-name aaa
$JSON_OUT = ConvertFrom-Json "$OUT"
$ACCESS_KEY=$JSON_OUT.Credentials.AccessKeyId
$SECRET_KEY=$JSON_OUT.Credentials.SecretAccessKey
$SESSION_TOKEN=$JSON_OUT.Credentials.SessionToken
"Paste these env variables to your terminal to assume the role"
-join ("`n", '$Env:AWS_ACCESS_KEY_ID="', "$ACCESS_KEY", '"')
-join ('$Env:AWS_SECRET_ACCESS_KEY="', "$SECRET_KEY", '"')
-join ('$Env:AWS_SESSION_TOKEN="', "$SESSION_TOKEN", '"')
If you use jq the way you do, your export values will contain quotation marks, e.g.
"ASIASZHPM3IXQXXOXFOY"
rather than:
ASIASZHPM3IXQXXOXFOY
To avoid this, you have to add -r flag to jq:
AccessKeyId=$(cat $output | jq -r '.Credentials''.AccessKeyId')
SecretAccessKey=$(cat $output | jq -r '.Credentials''.SecretAccessKey')
SessionToken=$(cat $output | jq -r '.Credentials''.SessionToken')
Adding to #carmel's answer
Here's a function that you can add to your bashrc/zshrc to set the environment variables automatically.
function assume-role() {
OUT=$(aws sts assume-role --role-arn $1 --role-session-name $2);\
export AWS_ACCESS_KEY_ID=$(echo $OUT | jq -r '.Credentials''.AccessKeyId');\
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | jq -r '.Credentials''.SecretAccessKey');\
export AWS_SESSION_TOKEN=$(echo $OUT | jq -r '.Credentials''.SessionToken');
}
This can then be used as:
$ assume-role <role-arn> <role-session-name>
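To double-check that the role was actually assumed, you can inspect the caller identity afterwards; the account ID and role name below are placeholders:
$ assume-role arn:aws:iam::123456789012:role/MyRole my-session
$ aws sts get-caller-identity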
Adding the answer I needed, in case it helps someone out: I did not have jq available in the container I was using.
You can also use cut to parse the output:
OUT=$(aws sts assume-role --role-arn "arn:aws:iam::<account-id>:role/<role-name>" --role-session-name <session-name>)
export AWS_ACCESS_KEY_ID=$(echo $OUT | cut -d '"' -f 6 )
export AWS_SECRET_ACCESS_KEY=$(echo $OUT | cut -d '"' -f 10 )
export AWS_SESSION_TOKEN=$(echo $OUT | cut -d '"' -f 14 )
A single command to get the result:
eval $(aws sts assume-role --role-arn arn:aws:iam::${AWSAccountId}:role/role-name --role-session-name awscli-session | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
In order to delete a log stream from a log group using the CLI, individual log stream names are required.
Is there a way to delete all log streams belonging to a log group using a single command?
You can achieve this by using --query to target the results of describe-log-streams. This allows you to loop through and delete the results.
aws logs describe-log-streams --log-group-name $LOG_GROUP_NAME --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP_NAME --log-stream-name $x; done
You can use --query to target all or specific groups or streams.
Delete streams from a specific month
aws logs describe-log-streams --log-group-name $LOG_GROUP --query 'logStreams[?starts_with(logStreamName,`2017/07`)].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP --log-stream-name $x; done
Delete All log groups - Warning, it deletes EVERYTHING!
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
Clearing specific log groups
aws logs describe-log-groups --query 'logGroups[?starts_with(logGroupName,`'"$LOG_GROUP_NAME"'`)].logGroupName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
I implemented a script around the command from #Stephen's answer. The script shows a summary before deletion and tracks the progress of the deletion.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
for name in ${LOG_STREAMS}; do
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
done
To delete all log streams associated with a specific log group, run the following command, replacing NAME_OF_LOG_GROUP with your group:
aws logs describe-log-streams --log-group-name NAME_OF_LOG_GROUP --output text | awk '{print $7}' | while read x;
do aws logs delete-log-stream --log-group-name NAME_OF_LOG_GROUP --log-stream-name $x
done
Here is a script to delete all the log streams in a log group using Python (boto3). Just change logGroupName to match your log group. (Note that describe_log_streams returns at most 50 streams per call, so for larger groups you may need to paginate.)
import boto3
client = boto3.client('logs')
response = client.describe_log_streams(
logGroupName='/aws/batch/job'
)
def delete_stream(stream):
delete_response = client.delete_log_stream(
logGroupName='/aws/batch/job',
logStreamName=stream['logStreamName']
)
print(delete_response)
# In Python 3, map() is lazy, so iterate explicitly to actually delete the streams
for stream in response['logStreams']:
    delete_stream(stream)
Based on #german-lashevich's answer
If you have thousands of log streams, you will need to parallelize.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
step() {
local name=$1
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
}
N=20
for name in ${LOG_STREAMS}; do ((i=i%N)); ((i++==0)) && wait ; step "$name" & done
This cannot be done using a single AWS CLI command, so we achieved it with a script that first retrieves all the log streams of a log group and then deletes them in a loop.
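For reference, a minimal sketch of such a script (it assumes stream names contain no whitespace and that the group has few enough streams to fit into a single describe-log-streams response):
#!/bin/bash
LOG_GROUP_NAME=${1:?log group name is not set}
# Retrieve every stream name in the group, then delete them one by one
for stream in $(aws logs describe-log-streams --log-group-name "$LOG_GROUP_NAME" \
    --query 'logStreams[*].logStreamName' --output text); do
  aws logs delete-log-stream --log-group-name "$LOG_GROUP_NAME" --log-stream-name "$stream"
done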
For Windows users, this PowerShell script could be useful for removing all the log streams in a log group:
#Set your log group name
$log_group_name = "/production/log-group-name"
aws logs describe-log-streams --log-group-name $log_group_name --query logStreams --output json | ConvertFrom-json | ForEach-Object {$_.logStreamName} | ForEach-Object {
aws logs delete-log-stream --log-group-name $log_group_name --log-stream-name $_
Write-Host ($_ + " -> deleted") -ForegroundColor Green
}
Just save it as your_script_name.ps1 and execute it in powershell.
An alternative version uses the AWS Tools for PowerShell on Windows; launch a PowerShell command line and use:
$LOG_GROUP_NAME="cloud-watch-group-name";
$LOG_STREAM_NAME_PREFIX="cloud-watch-log-stream-name-prefix";
Set-DefaultAWSRegion -Region us-your-regions;
Set-AWSCredential -AccessKey ACCESSKEYEXAMPLE -SecretKey sEcReTKey/EXamPLE/xxxddddEXAMPLEKEY -StoreAs MyProfileName
Get-CWLLogStream -LogGroupName $LOG_GROUP_NAME -LogStreamNamePrefix $LOG_STREAM_NAME_PREFIX | Remove-CWLLogStream -LogGroupName $LOG_GROUP_NAME;
You may use the -Force parameter on the Remove-CWLLogStream cmdlet in case you don't want to confirm one by one.
References
https://docs.aws.amazon.com/powershell/latest/reference/Index.html
The others have already described how you can paginate through all the log streams and delete them one by one.
I would like to offer two alternative ways that have (more or less) the same effect, but don't require you to loop through all the log streams.
Deleting the log group, then re-creating it has the desired effect: All the log streams of the log group will be deleted.
delete-log-group
followed by:
create-log-group
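A sketch with a placeholder log group name:
aws logs delete-log-group --log-group-name /my/log-group
aws logs create-log-group --log-group-name /my/log-group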
CAVEAT: Deleting a log group can have unintended consequences. For example, subscriptions and the retention policy will be deleted as well, and those have to be restored too when the log group is re-created.
Another workaround is to set a 1 day retention period.
put-retention-policy
It won't have an immediate effect; you will have to wait about a day, but after that all the old data will be deleted. The names of the old streams and their metadata (last event time, creation time, etc.) will remain, but you won't be charged for that (as far as I can tell based on my own bill).
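For example (again with a placeholder log group name):
aws logs put-retention-policy --log-group-name /my/log-group --retention-in-days 1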
So it is not exactly what you asked for. However, probably the most important reason why one would want to delete all the log streams is to delete the logged data (to reduce costs, or for compliance reasons), and this approach achieves that.
WARNING: Don't forget to change the retention policy after the old data is gone, or you will continually delete data after 1 day, and chances are, it is not what you want in the long run.
If you are doing this in zsh and you only need a simple one-liner, just update the following values:
* Pattern
* AWS_SECRET_ACCESS_KEY
* AWS_ACCESS_KEY_ID
* AWS_DEFAULT_REGION
Pattern can be any text; you can also add ^ for the beginning of the line or $ for the end of the line.
Then run the command below:
Pattern="YOUR_PATTERN" && setupKeys="AWS_ACCESS_KEY_ID=YOUR_KEY AWS_SECRET_ACCESS_KEY=YOUR_KEY AWS_DEFAULT_REGION=YOUR_REGION" &&
eval "${setupKeys} aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | sed 's/|//g'| sed 's/\s//g'| grep -i ${Pattern} "| while read x; do echo "deleting $x" && $setupKeys aws logs delete-log-group --log-group-name $x; done
--log-group-name is not optional in the AWS CLI; you can try using * as the --log-group-name value (in a test environment):
aws logs delete-log-group --log-group-name my-logs
Reference URL:
http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html
If you are using a prefix, you could use the following command.
aws logs describe-log-streams --log-group-name <log_group_name> --log-stream-name-prefix "<log_stream_prefix>" --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name <log_group_name> --log-stream-name $x; done
I am trying to figure out what the s3cmd command would be to download files from a bucket by date. For example, I have a bucket named "test" and in that bucket there are different files from different dates. I am trying to get the files that were uploaded yesterday. What would the command be?
There is no single command that will allow you to do that. You have to write a script something like this, or use an SDK that allows you to do it. The script below is a sample that downloads S3 files older than a given age (e.g., 30 days).
#!/bin/bash
# Usage: ./getOld "bucketname" "30 days"
s3cmd ls s3://$1 | while read -r line; do
createDate=`echo $line|awk {'print $1" "$2'}`
createDate=`date -d"$createDate" +%s`
olderThan=`date -d"-$2" +%s`
if [[ $createDate -lt $olderThan ]]
then
fileName=`echo $line|awk {'print $4'}`
echo $fileName
if [[ $fileName != "" ]]
then
s3cmd get "$fileName"
fi
fi
done;
I like s3cmd, but for working with a single-line command I prefer the JSON output of the AWS CLI and the jq JSON processor.
The command will look like
aws s3api list-objects --bucket "yourbucket" |\
jq '.Contents[] | select(.LastModified | startswith("yourdate")).Key' --raw-output |\
xargs -I {} aws s3 cp s3://yourbucket/{} .
Basically, what the command does:
* lists all objects from the given bucket
* (the interesting part) jq parses the Contents array and selects the elements where the LastModified value starts with your pattern (which you will need to change), gets the Key of the S3 object, and --raw-output strips the quotes from the value
* passes the result to an aws s3 cp command to download the file from S3
If you want to automate it a bit further, you can get yesterday's date from the command line.
For macOS:
$ export YESTERDAY=`date -v-1d +%F`
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
For Linux (or other systems with GNU date):
$ export YESTERDAY=`date -d "1 day ago" '+%Y-%m-%d' `
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
Now you get the idea; change the YESTERDAY variable if you want a different kind of date.
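The same idea works for any prefix of the ISO timestamp. For example, to grab everything uploaded in the current month (the bucket name is a placeholder):
$ export MONTH=`date +%Y-%m`
$ aws s3api list-objects --bucket "yourbucket" |\
jq '.Contents[] | select(.LastModified | startswith('\"$MONTH\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://yourbucket/{} .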
I'm trying to remove all my AWS EC2 snapshots except the last 6 with this script:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
# Backup script
Volume="{VOL-DATA}"
Owner="{OWNER}"
Description="{DESCRIPTION}"
Local_numbackups=6
Local_region="us-west-1"
# Remove old snapshots associated to a description, keep the last $Local_numbackups
aws ec2 describe-snapshots --filters Name=description,Values=$Description | grep "SnapshotId" | head -n -$Local_numbackups | awk '{print $2}' | sed -e 's/,//g' | xargs -n 1 -t aws ec2 delete-snapshot --snapshot-id
However, it doesn't work. It deletes snapshots, but not the oldest ones. Why?
You're trying to do something too complex to be handled (gracefully) in one line, so we'll need to break it down a bit. First, let's get the snapshots sorted by age, oldest to newest:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n
Then we can drop the StartTime field to get the snapshot ID alone:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'
head (or tail) aren't really suitable for discarding the fixed number of snapshots we want to keep. We need to filter those out another way. So, putting it all together:
# Get array of snapshot IDs sorted by age (oldest to newest)
snapshots=($(aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'))
# Get number of snapshots
count=${#snapshots[@]}
if [ "$count" -lt "$Local_numbackups" ]; then
echo "We already have less than $Local_numbackups snapshots"
exit 0
else
# Drop the last (newest) $Local_numbackups IDs from the array
snapshots=(${snapshots[@]:0:$((count - Local_numbackups))})
# Loop through the remaining snapshots and delete
for snapshot in ${snapshots[@]}; do
aws ec2 delete-snapshot --snapshot-id $snapshot
done
fi
(While it's obviously possible to do this in bash with the AWS CLI, it's complex enough that I'd personally rather use a more robust language and the AWS SDK.)
Here is a sample.
days2keep="30"
region="us-west-2"
name="jdoe"
# date: the -v flag is for OSX (BSD date)
cutoffdate=`date -j -v-${days2keep}d '+%Y-%m-%d'`
echo "Finding list of snapshots before $cutoffdate "
oldsnapids=$(aws ec2 describe-snapshots --region $region --filters Name=tag:Name,Values=$name --query Snapshots[?StartTime\<=\`$cutoffdate\`].SnapshotId --output text)
for snapid in $oldsnapids
do
echo Deleting snapshot $snapid
aws ec2 delete-snapshot --snapshot-id $snapid --region $region
done
We can delete all old snapshots using the steps below:
List all the old snapshot IDs and put them in one file such as /opt/snapshot.txt (a sample command is shown after the credentials step below).
Then use the "aws configure" command to set up access to the AWS account from the command line; at this point we need to provide credentials:
Such as:
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXX
Default region name [None]: XXXXXXXXXXXXXXXX
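For step 1, a possible way to build /opt/snapshot.txt (the cutoff date is a placeholder; adjust the query to your own definition of "old"):
aws ec2 describe-snapshots --owner-ids self \
    --query 'Snapshots[?StartTime<=`2020-01-01`].SnapshotId' \
    --output text | tr '\t' '\n' > /opt/snapshot.txt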
After that we can use the shell script below; we need to give it the snapshot IDs file name.
Code:
#!/bin/bash
list=$(cat /opt/snapshot.txt)
for i in $list
do
aws ec2 delete-snapshot --snapshot-id $i
if [ $? -eq 0 ]; then
echo Going Good
else
echo FAIL
fi
done
Thanks