AWS IAM - How to show/describe policy statements using the CLI?

How can I use the AWS CLI to show an IAM policy's full body including the Effect, Action and Resource statements?
"aws iam list-policies" command lists all the policies but not the actual JSON E,A,R statements contained within the policy.
I could use the "aws iam get-policy-version" command but this does not show the policy name in its output. When I am running this command via a script to obtain information for dozens of policies, there is no way to know which policy the output will belong to.
Is there another way of doing this?

The only way to do this, as you've said, is the following:
Get all IAM Policies via the list-policies verb.
Loop over the output, taking the "Arn" and "DefaultVersionId".
Pass these into the get-policy-version verb.
Map the PolicyName from the iteration to the PolicyVersion.Document value in the second request; a sketch of this loop follows.
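For reference, here is a minimal sketch of that loop (the query string and flags are illustrative; --scope Local limits the output to customer-managed policies):
#!/bin/bash
# Print each customer-managed policy's name followed by its default-version document
aws iam list-policies --scope Local \
  --query 'Policies[].[PolicyName,Arn,DefaultVersionId]' --output text |
while read -r NAME ARN VERSION; do
  echo "=== ${NAME} ==="
  aws iam get-policy-version --policy-arn "$ARN" --version-id "$VERSION" \
    --query 'PolicyVersion.Document' --output json
done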

A slight modification to #uberhumus' suggestion to reduce the number of policies that will be extracted: use the --scope Local qualifier in the query to limit it. Otherwise it will spit out hundreds of policies in the account. Limiting the scope to Local will only list policies which are user-provisioned in the account. Here's the modified version:
RAW_POLICIES=$(aws iam list-policies --scope Local --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done

As mokugo-devops said in his answer, and you stated in your question, you could only use "get-policy-version" to get the proper JSON. Here is how I would do it:
RAW_POLICIES=$(aws iam list-policies --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done
Now a bit of explanation about the script:
RAW_POLICIES will get you a giant list of arrays, each containing the policy name (as requested) plus the policy ARN and default version ID (as needed). It will, however, contain spaces that make iterating over it directly in bash less comfortable (though not impossible for the sufficiently stubborn).
To make the upcoming loop easier, we strip the spaces and then use sed to insert the newlines we need. This is done in the 2nd line, which defines the POLICIES variable.
This leaves very little to do in the actual loop. There we just print the policy name, some pretty lines, and invoke the command you anticipated would be the one used, get-policy-version.


Best method to periodically renew your AWS access keys

I realized I never renewed my AWS access keys, and they are credentials that should be renewed periodically in order to avoid attacks.
So... what is the best way to renew them automatically without any impact, if they are used just from my laptop?
Finally I created this bash script:
#!/bin/bash
set -e # exit on non-zero command
set -u # force vars to be declared
set -o pipefail # avoids errors in pipelines to be masked
echo "retrieving current account id..."
current_access_key_list=$(aws iam list-access-keys | jq -r '.AccessKeyMetadata')
number_of_current_access_keys=$(echo $current_access_key_list| jq length)
current_access_key=$(echo $current_access_key_list | jq -r '.[]|.AccessKeyId')
if [[ ! "$number_of_current_access_keys" == "1" ]]; then
echo "ERROR: There already are more than 1 access key"
exit 1
fi
echo "Current access key is ${current_access_key}"
echo "creating a new access key..."
new_access_key=$(aws iam create-access-key)
access_key=$(echo $new_access_key| jq -r '.AccessKey.AccessKeyId')
access_key_secret=$(echo $new_access_key| jq -r '.AccessKey.SecretAccessKey')
echo "New access key is: ${access_key}"
echo "performing credentials backup..."
cp ~/.aws/credentials ~/.aws/credentials.bak
echo "changing local credentials..."
aws configure set aws_access_key_id "${access_key}"
aws configure set aws_secret_access_key "${access_key_secret}"
echo "wait 10 seconds to ensure new access_key is set..."
sleep 10
echo "check new credentials work fine"
aws iam get-user | jq -r '.User'
echo "removing old access key $current_access_key"
aws iam delete-access-key --access-key-id $current_access_key
echo "Congrats. You are using the new credentials."
echo "Feel free to remove the backup file:"
echo " rm ~/.aws/credentials.bak"
I placed that script into ~/.local/bin to ensure it is in the path, and then I added these lines at the end of my .bashrc and/or .zshrc files:
# rotate AWS keys if they are too old
if [[ -n "$(find ~/.aws -mtime +30 -name credentials)" ]]; then
AWS_PROFILE=profile-1 rotate_aws_access_key
AWS_PROFILE=profile-2 rotate_aws_access_key
fi
So any time I open a terminal (which is really frequent), it checks whether the credentials file has gone unmodified for more than one month and, if so, tries to renew my credentials automatically.
The worst thing that might happen is that it could create the new access key but fail to update my local credentials, which would force me to remove the new key by hand.
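If that ever happens, a hypothetical cleanup is simply to list the keys and delete the dangling one by hand (the key ID below is a placeholder):
# list the access keys for the current profile, then delete the one you are not using
aws iam list-access-keys
aws iam delete-access-key --access-key-id AKIAIOSFODNN7EXAMPLE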

How do I list WAF objects that do not have any resources using the AWS CLI?

I'd like to list all objects in WAF that do not have resources connected to them using the aws cli in my terminal.
Is there any way I can do this using the aws wafv2 list-web-acls --scope <value> AWS CLI command with other parameters?
Thanks
Looks like there's no command for that, so I created a script that places the results in a file. Might come in handy for anyone on here.
#!/bin/bash
#list the web acl objects with their corresponding arn and save it in a file
aws wafv2 list-web-acls --scope REGIONAL | grep "ARN" > output.txt
# Next generate only the ARNs and save the output in a separate file
awk -F\" '{print $4}' output.txt > input.txt
#Create a file to store ARN numbers together with their resources attached
touch resources.txt
#loop through each line and generate the resource attached to an ARN object based on its ARN no
while read p; do
echo $p >> resources.txt && \
aws wafv2 list-resources-for-web-acl --web-acl-arn $p >> resources.txt && \
echo ------------------------ >> resources.txt
#echo -e ' \t ' >> resources.txt
done < input.txt
#remove unwanted files
rm input.txt output.txt
#list webacl objects that do not have resources attached to them
grep -B 3 "\[\]" resources.txt | grep "webacl"
#remove any files left
rm resources.txt
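An alternative sketch that avoids the temporary files, using --query to count attached resources directly (REGIONAL scope assumed; note that list-resources-for-web-acl returns one resource type per call, so you may need to repeat it with --resource-type for other types):
#!/bin/bash
# Print the name of every regional web ACL with no resources attached
aws wafv2 list-web-acls --scope REGIONAL --query 'WebACLs[].[Name,ARN]' --output text |
while read -r NAME ARN; do
  COUNT=$(aws wafv2 list-resources-for-web-acl --web-acl-arn "$ARN" --query 'length(ResourceArns)' --output text)
  [ "$COUNT" -eq 0 ] && echo "$NAME"
done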

delete all log streams of a log group using aws cli

In order to delete a log stream from a log group using the CLI command, individual log stream names are required.
Is there a way to delete all log streams belonging to a log group using a single command?
You can achieve this by using --query to target the results of describe-log-streams. This allows you to loop through and delete the results.
aws logs describe-log-streams --log-group-name $LOG_GROUP_NAME --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP_NAME --log-stream-name $x; done
You can use --query to target all or specific groups or streams.
Delete streams from a specific month
aws logs describe-log-streams --log-group-name $LOG_GROUP --query 'logStreams[?starts_with(logStreamName,`2017/07`)].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name $LOG_GROUP --log-stream-name $x; done
Delete All log groups - Warning, it deletes EVERYTHING!
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
Clearing specific log groups
aws logs describe-log-groups --query "logGroups[?starts_with(logGroupName,'$LOG_GROUP_NAME')].logGroupName" --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-group --log-group-name $x; done
Here is a script implemented with the command from #Stephen's answer. The script shows a summary before deletion and tracks the progress of the deletion.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
for name in ${LOG_STREAMS}; do
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
done
To delete all log streams associated with a specific log group, run the following command, replacing NAME_OF_LOG_GROUP with your group:
aws logs describe-log-streams --log-group-name NAME_OF_LOG_GROUP --output text | awk '{print $7}' | while read x;
do aws logs delete-log-stream --log-group-name NAME_OF_LOG_GROUP --log-stream-name $x
done
Here is a script to delete all log streams in a log group using Python. Just change the logGroupName to match your log group.
import boto3
client = boto3.client('logs')
response = client.describe_log_streams(
logGroupName='/aws/batch/job'
)
def delete_stream(stream):
    delete_response = client.delete_log_stream(
        logGroupName='/aws/batch/job',
        logStreamName=stream['logStreamName']
    )
    print(delete_response)

# map() is lazy in Python 3, so use a plain loop to actually perform the deletions
for stream in response['logStreams']:
    delete_stream(stream)
Based on #german-lashevich's answer
If you have thousands of log streams, you will need to parallelize.
#!/usr/bin/env bash
LOG_GROUP_NAME=${1:?log group name is not set}
echo Getting stream names...
LOG_STREAMS=$(
aws logs describe-log-streams \
--log-group-name ${LOG_GROUP_NAME} \
--query 'logStreams[*].logStreamName' \
--output table |
awk '{print $2}' |
grep -v ^$ |
grep -v DescribeLogStreams
)
echo These streams will be deleted:
printf "${LOG_STREAMS}\n"
echo Total $(wc -l <<<"${LOG_STREAMS}") streams
echo
while true; do
read -p "Prceed? " yn
case $yn in
[Yy]*) break ;;
[Nn]*) exit ;;
*) echo "Please answer yes or no." ;;
esac
done
step() {
local name=$1
printf "Delete stream ${name}... "
aws logs delete-log-stream --log-group-name ${LOG_GROUP_NAME} --log-stream-name ${name} && echo OK || echo Fail
}
N=20
for name in ${LOG_STREAMS}; do ((i=i%N)); ((i++==0)) && wait ; step "$name" & done
This cannot be done using a single AWS CLI command, so we achieved it with a script that first retrieves all the log streams of a log group and then deletes them in a loop.
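For reference, a minimal sketch of that two-step approach (the log group name is illustrative):
#!/bin/bash
# Step 1: list the stream names; step 2: delete each one
LOG_GROUP_NAME="/my/example/log-group"
aws logs describe-log-streams --log-group-name "$LOG_GROUP_NAME" --query 'logStreams[].logStreamName' --output text | tr '\t' '\n' |
while read -r STREAM; do
  aws logs delete-log-stream --log-group-name "$LOG_GROUP_NAME" --log-stream-name "$STREAM"
done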
For Windows users, this PowerShell script can be useful for removing all the log streams in a log group:
#Set your log group name
$log_group_name = "/production/log-group-name"
aws logs describe-log-streams --log-group-name $log_group_name --query logStreams --output json | ConvertFrom-json | ForEach-Object {$_.logStreamName} | ForEach-Object {
aws logs delete-log-stream --log-group-name $log_group_name --log-stream-name $_
Write-Host ($_ + " -> deleted") -ForegroundColor Green
}
Just save it as your_script_name.ps1 and execute it in powershell.
An alternative version using Powershell CLI on Windows, launch powershell command line and use:
$LOG_GROUP_NAME="cloud-watch-group-name";
$LOG_STREAM_NAME_PREFIX="cloud-watch-log-stream-name";
Set-DefaultAWSRegion -Region us-your-regions;
Set-AWSCredential -AccessKey ACCESSKEYEXAMPLE -SecretKey sEcReTKey/EXamPLE/xxxddddEXAMPLEKEY -StoreAs MyProfileName
Get-CWLLogStream -LogGroupName $LOG_GROUP_NAME -LogStreamNamePrefix $LOG_STREAM_NAME_PREFIX | Remove-CWLLogStream -LogGroupName $LOG_GROUP_NAME;
You may use the -Force parameter on the Remove-CWLLogStream cmdlet in case you don't want to confirm one by one.
References
https://docs.aws.amazon.com/powershell/latest/reference/Index.html
The others have already described how you can paginate through all the log streams and delete them one by one.
I would like to offer two alternative ways that have (more or less) the same effect, but don't require you to loop through all the log streams.
Deleting the log group, then re-creating it has the desired effect: All the log streams of the log group will be deleted.
delete-log-group
followed by:
create-log-group
CAVEAT: Deleting a log group can have unintended consequences. For example, subscriptions and the retention policy will be deleted as well, and those have to be restored too when the log group is re-created.
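A minimal sketch of the delete-and-recreate approach (the group name is illustrative; remember to re-apply retention, subscriptions and tags afterwards):
# delete the whole group (removes every stream), then recreate it empty
aws logs delete-log-group --log-group-name /my/example/log-group
aws logs create-log-group --log-group-name /my/example/log-group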
Another workaround is to set a 1 day retention period.
put-retention-policy
It won't have an immediate effect; you will have to wait about a day, but after that all the old data will be deleted. The names of the old streams and their metadata (last event time, creation time, etc.) will remain, though you won't be charged for that (as far as I can tell from my own bill).
So it is not exactly what you asked for. However, probably the most important reason why one would want to delete all the log streams is to delete the logged data (to reduce costs, or for compliance reasons), and this approach achieves that.
WARNING: Don't forget to change the retention policy after the old data is gone, or you will continually delete data after 1 day, and chances are, it is not what you want in the long run.
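A sketch of the retention approach (group name illustrative; set the retention back once the old data is gone):
# expire all events after 1 day; CloudWatch purges the old data in the background
aws logs put-retention-policy --log-group-name /my/example/log-group --retention-in-days 1
# later, restore whatever retention you actually want, e.g. 30 days
aws logs put-retention-policy --log-group-name /my/example/log-group --retention-in-days 30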
If you are doing this in zsh and you only need a simple one-liner, just update these values:
* Pattern
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
Pattern can be any text; you can also add ^ for the beginning of the line or $ for the end of the line.
Then run the command below:
Pattern="YOUR_PATTERN" && setupKeys="AWS_ACCESS_KEY_ID=YOUR_KEY AWS_SECRET_ACCESS_KEY=YOUR_KEY AWS_DEFAULT_REGION=YOUR_REGION" &&
eval "${setupKeys} aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output table | sed 's/|//g'| sed 's/\s//g'| grep -i ${Pattern} "| while read x; do echo "deleting $x" && $setupKeys aws logs delete-log-group --log-group-name $x; done
--log-group-name is not optional in the AWS CLI; you can try using an * for the --log-group-name value (in a test environment).
aws logs delete-log-group --log-group-name my-logs
Reference URL:
http://docs.aws.amazon.com/cli/latest/reference/logs/delete-log-group.html
If you are using a prefix, you could use the following command.
aws logs describe-log-streams --log-group-name <log_group_name> --log-stream-name-prefix "<give_a_log_stream_prefix>" --query 'logStreams[*].logStreamName' --output table | awk '{print $2}' | grep -v ^$ | while read x; do aws logs delete-log-stream --log-group-name <log_group_name> --log-stream-name $x; done;

looking for s3cmd download command for a certain date

I am trying to figure out on what the s3cmd command would be to download files from bucket by date, so for example i have a bucket named "test" and in that bucket there are different files from different dates. I am trying to get the files that were uploaded yesterday. what would the command be?
There is no single command that will allow you to do that. You have to write a script something like this, or use an SDK that allows it. The script below is a sample that downloads S3 files older than 30 days.
#!/bin/bash
# Usage: ./getOld "bucketname" "30 days"
s3cmd ls s3://$1 | while read -r line; do
createDate=`echo $line|awk {'print $1" "$2'}`
createDate=`date -d"$createDate" +%s`
olderThan=`date -d"-$2" +%s`
if [[ $createDate -lt $olderThan ]]
then
fileName=`echo $line|awk {'print $4'}`
echo $fileName
if [[ $fileName != "" ]]
then
s3cmd get "$fileName"
fi
fi
done;
I like s3cmd, but to work with a single-line command I prefer the JSON output of the AWS CLI and the jq JSON processor.
The command will look like
aws s3api list-objects --bucket "yourbucket" |\
jq '.Contents[] | select(.LastModified | startswith("yourdate")).Key' --raw-output |\
xargs -I {} aws s3 cp s3://yourbucket/{} .
Basically, what the command does:
list all objects from a given bucket
(the interesting part) jq parses the Contents array and selects the elements whose LastModified value starts with your pattern (which you will need to change), takes the Key of the S3 object, and --raw-output strips the quotes from the value
pass the result to an aws copy command to download the file from S3
If you want to automate a bit further, you can get yesterday's date from the command line.
for mac os
$ export YESTERDAY=`date -v-1d +%F`
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
for Linux (or other flavors of date that I am less familiar with)
$ export YESTERDAY=`date -d "1 day ago" '+%Y-%m-%d' `
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
Now you get the idea if you want to change the YESTERDAY variable to a different kind of date.
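For example, a hypothetical variation that grabs everything modified on or after a given date instead of a single day (the date literal is illustrative; ISO timestamps compare lexically in jq):
$ export SINCE="2023-01-01"
$ aws s3api list-objects --bucket "yourbucket" |\
jq --arg since "$SINCE" -r '.Contents[] | select(.LastModified >= $since).Key' |\
xargs -I {} aws s3 cp s3://yourbucket/{} .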

Recover Deleted Objects From Amazon S3

I have a bucket (version enabled); how can I get back objects that were accidentally permanently deleted from my bucket?
I have created a script to restore the objects that have a delete marker. You'll have to invoke it like below:
sh Undelete_deletemarker.sh bucketname path/to/certain/folder
Script:
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove the delete marker for each object under the given prefix (restoring the previous version)
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{print $3}')
VERSION_ID=$( echo $obj | awk '{print $5}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key $KEY --version-id $VERSION_ID
done
Happy Coding! ;)
Thank you, Kc Bickey, this script works wonderfully! Only thing I might add for others is to make sure " $VERSION_ID" immediately follows "--version-id" on line 12. The forum seems to have wrapped " $VERSION_ID" to the next line and it causes the script to error until that's corrected.
Script:
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove the delete marker for each object under the given prefix (restoring the previous version)
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{print $3}')
VERSION_ID=$( echo $obj | awk '{print $5}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key $KEY --version-id $VERSION_ID
done
With bucket versioning enabled, to permanently delete an object you need to specifically mention the version of the object (DELETE Object versionId).
If you've done so, you cannot recover that specific version; you only get access to the previous versions.
When versioning is enabled, a simple DELETE cannot permanently delete an object. Instead, Amazon S3 inserts a delete marker in the bucket, so you can recover from that specific marker; but if the marker itself is deleted (and you mention it was permanently deleted) you cannot recover.
Did you enable Cross-Region Replication? If so, you can retrieve the object in the other region:
If a DELETE request specifies a particular object version ID to delete, Amazon S3 will delete that object version in the source bucket, but it will not replicate the deletion in the destination bucket (in other words, it will not delete the same object version from the destination bucket). This behavior protects data from malicious deletions.
Edit: If you have versioning enabled on your bucket, you should see the Versions Hide/Show toggle button, and when Show is selected you should see the additional Version ID column.
If your bucket objects have white space in their file names, the previous scripts may not work properly. This script takes the key including the white space.
#!/bin/bash
#please provide the bucketname and path to destination folder to restore
# Remove the delete marker for each object under the given prefix (restoring the previous version)
aws s3api list-object-versions --bucket $1 --prefix $2 --output text |
grep "DELETEMARKERS" | while read obj
do
KEY=$( echo $obj| awk '{indice=index($0,$(NF-1))-index($0,$3);print substr($0, index($0,$3), indice-1)}')
VERSION_ID=$( echo $obj | awk '{print $NF}')
echo $KEY
echo $VERSION_ID
aws s3api delete-object --bucket $1 --key "$KEY" --version-id $VERSION_ID
done
This version of the script worked really well for me. I have a bucket that has a directory with 180,000 items in it, and this one chews through them and restores all the files that are in a directory/folder that is within the bucket.
If you just need to restore all the items in a bucket that don't have a directory, then you can just drop the prefix parameter.
#!/bin/bash
BUCKET_NAME=mybucketname
DIRECTORY=myfoldername
function run() {
aws s3api list-object-versions --bucket ${BUCKET_NAME} --prefix="${DIRECTORY}" --query='{Objects: DeleteMarkers[].{Key:Key}}' --output text |
while read KEY
do
if [[ "$KEY" == "None" ]]; then
continue
else
KEY=$(echo ${KEY} | awk '{$1=""; print $0}' | sed "s/^ *//g")
VERSION=$(aws s3api list-object-versions --bucket ${BUCKET_NAME} --prefix="$KEY" --query='{Objects: DeleteMarkers[].{VersionId:VersionId}}' --output text | awk '{$1=""; print $0}' | sed "s/^ *//g")
echo ${KEY}
echo ${VERSION}
fi
aws s3api delete-object --bucket ${BUCKET_NAME} --key="${KEY}" --version-id ${VERSION}
done
}
run
Note: running this script a second time will run, but it won't do anything useful; it will just return the same records. If you have a massive bucket, you might set up 3-4 copies of the script that filter by files starting with a certain letter/number. At least this way you can start working on files deeper down in the bucket.
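A hypothetical way to split that work, re-using the earlier Undelete_deletemarker.sh script that takes the bucket and prefix as arguments (the prefixes below are illustrative):
# run one restore job per key prefix so several prefixes are processed in parallel
for PREFIX in a b c d; do
  sh Undelete_deletemarker.sh mybucketname "myfoldername/${PREFIX}" &
done
wait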