Best method to periodically renew your AWS access keys - amazon-web-services

I realized I never renewed my AWS access keys, and they are credentials that should be renewed periodically in order to avoid attacks.
So... what is the best way to renew them automatically without any impact, if they are used just from my laptop?

Finally I created this bash script:
#!/bin/bash
set -e # exit on non-zero command
set -u # force vars to be declared
set -o pipefail # avoids errors in pipelines to be masked
echo "retrieving current account id..."
current_access_key_list=$(aws iam list-access-keys | jq -r '.AccessKeyMetadata')
number_of_current_access_keys=$(echo $current_access_key_list| jq length)
current_access_key=$(echo $current_access_key_list | jq -r '.[]|.AccessKeyId')
if [[ ! "$number_of_current_access_keys" == "1" ]]; then
echo "ERROR: There already are more than 1 access key"
exit 1
fi
echo "Current access key is ${current_access_key}"
echo "creating a new access key..."
new_access_key=$(aws iam create-access-key)
access_key=$(echo "$new_access_key" | jq -r '.AccessKey.AccessKeyId')
access_key_secret=$(echo "$new_access_key" | jq -r '.AccessKey.SecretAccessKey')
echo "New access key is: ${access_key}"
echo "performing credentials backup..."
cp ~/.aws/credentials ~/.aws/credentials.bak
echo "changing local credentials..."
aws configure set aws_access_key_id "${access_key}"
aws configure set aws_secret_access_key "${access_key_secret}"
echo "wait 10 seconds to ensure new access_key is set..."
sleep 10
echo "check new credentials work fine"
aws iam get-user | jq -r '.User'
echo "removing old access key $current_access_key"
aws iam delete-access-key --access-key-id $current_access_key
echo "Congrats. You are using the new credentials."
echo "Feel free to remove the backup file:"
echo " rm ~/.aws/credentials.bak"
I placed that script into ~/.local/bin to ensure it is in the path, and then I added these lines at the end of my .bashrc and/or .zshrc files:
# rotate AWS keys if they are too old
if [[ -n "$(find ~/.aws -mtime +30 -name credentials)" ]]; then
  AWS_PROFILE=profile-1 rotate_aws_access_key
  AWS_PROFILE=profile-2 rotate_aws_access_key
fi
So any time I open a terminal (which happens really frequently) it checks whether the credentials file has gone unmodified for more than one month and, if so, tries to renew my credentials automatically.
The worst thing that might happen is that it creates the new access key but fails to update my local credentials, which would force me to remove the new key by hand.
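If that ever happens, the orphaned key can be removed by hand with the same CLI calls the script already uses (the key id below is just a placeholder):
aws iam list-access-keys                                        # find the key id that is not in ~/.aws/credentials
aws iam delete-access-key --access-key-id AKIAXXXXXXXXEXAMPLE   # placeholder key id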

Related

How to Check if AWS Named Configure profile exists

How do I check if a named profile exists before I attempt to use it?
aws cli will throw an ugly error if I attempt to use a non-existent profile, so I'd like to do something like this:
$(awsConfigurationExists "${profile_name}") && aws iam list-users --profile "${profile_name}" || echo "can't do it!"
Method 1 - Check entries in the .aws/config file
function awsConfigurationExists() {
  local profile_name="${1}"
  local profile_name_check=$(grep "\[profile ${profile_name}]" "$HOME/.aws/config")
  if [ -z "${profile_name_check}" ]; then
    return 1
  else
    return 0
  fi
}
Method 2 - Check results of aws configure list , see aws-cli issue #819
function awsConfigurationExists() {
  local profile_name="${1}"
  local profile_status=$( (aws configure --profile "${profile_name}" list) 2>&1)
  if [[ $profile_status = *'could not be found'* ]]; then
    return 1
  else
    return 0
  fi
}
usage
$(awsConfigurationExists "my-aws-profile") && echo "does exist" || echo "does not exist"
or
if $(awsConfigurationExists "my-aws-profile"); then
  echo "does exist"
else
  echo "does not exist"
fi
I was stuck with the same problem and the proposed answer did not work for me.
Here is my solution with aws-cli/2.8.5 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off:
export AWS_PROFILE=localstack
aws configure list-profiles | grep -q "${AWS_PROFILE}"
if [ $? -eq 0 ]; then
  echo "AWS Profile [$AWS_PROFILE] already exists"
else
  echo "Creating AWS Profile [$AWS_PROFILE]"
  aws configure --profile "$AWS_PROFILE" set aws_access_key_id test
  aws configure --profile "$AWS_PROFILE" set aws_secret_access_key test
fi
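One caveat with that check: grep -q matches substrings, so a profile named prod would also match production. A minimal sketch of a stricter whole-line match (still assuming CLI v2, which provides list-profiles):
if aws configure list-profiles | grep -qx "${AWS_PROFILE}"; then
  echo "AWS Profile [$AWS_PROFILE] already exists"
else
  echo "AWS Profile [$AWS_PROFILE] does not exist yet"
fi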

AWS IAM - How to show describe policy statements using the CLI?

How can I use the AWS CLI to show an IAM policy's full body including the Effect, Action and Resource statements?
"aws iam list-policies" command lists all the policies but not the actual JSON E,A,R statements contained within the policy.
I could use the "aws iam get-policy-version" command but this does not show the policy name in its output. When I am running this command via a script to obtain information for dozens of policies, there is no way to know which policy the output will belong to.
Is there another way of doing this?
The only way to do this, as you've said, is the following:
Get all IAM Policies via the list-policies verb.
Loop over the output, taking the "Arn" and "DefaultVersionId".
Pass these into the get-policy-version verb.
Map the PolicyName from the iteration to the PolicyVersion.Document value in the second request.
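For illustration, a minimal sketch of those four steps (a hedged example, not taken from the answer; it assumes --output text and the query paths shown below):
# list every policy as "Name  Arn  DefaultVersionId", then fetch each default version
aws iam list-policies \
    --query 'Policies[].[PolicyName,Arn,DefaultVersionId]' --output text |
while read -r name arn version; do
  echo "== ${name} =="
  aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
      --query 'PolicyVersion.Document'
done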
Slight modification to #uberhumus's suggestion to reduce the number of policies that will be extracted: use the --scope Local qualifier in the query to limit it, otherwise it will spit out hundreds of policies in the account. Limiting the scope to Local will only list policies which are user-provisioned in the account. Here's the modified version:
RAW_POLICIES=$(aws iam list-policies --scope Local --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done
As mokugo-devops said in his answer, and you stated in your question, you could only use "get-policy-version" to get the proper JSON. Here is how I would do it:
RAW_POLICIES=$(aws iam list-policies --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done
Now a bit of explanation about the script:
RAW_POLICIES will get you a giant list of arrays, each containing the name of the policy (as requested) plus the policy ARN and default version ID (as needed). It would however contain spaces that would make iterating over it directly in bash less comfortable (though not impossible for the sufficiently stubborn).
To make the upcoming loop easier we clean out the spaces and then use sed to insert the newlines we will need. This is done in the 2nd line, which defines the POLICIES variable.
This leaves us very little to do in the actual loop. Here we just print the policy name, some pretty lines, and invoke the command that you predicted would be the one used, get-policy-version.

Environment Variables in newest AWS EC2 instance

I am trying to get environment variables into an EC2 instance (trying to run a Django app on Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type ami-0ff8a91507f77f867). How do you get them in the newest version of Amazon Linux, or get logging enabled so the problem can be traced?
user-data text (modified from here):
#!/bin/bash
#trying to get a file made
touch /tmp/testfile.txt
echo 'This and that' > /tmp/testfile.txt   # echo (not cat) to write a literal string
#trying to log
echo 'Woot!' > /home/ec2-user/user-script-output.txt
#Trying to get the output logged to see what is going wrong
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
#trying to log
echo "XXXXXXXXXX STARTING USER DATA SCRIPT XXXXXXXXXXXXXX"
#trying to store the ENVIRONMENT VARIABLES
PARAMETER_PATH='/'
REGION='us-east-1'
# Functions
AWS="/usr/local/bin/aws"
get_parameter_store_tags() {
  echo $($AWS ssm get-parameters-by-path --with-decryption --path ${PARAMETER_PATH} --region ${REGION})
}
params_to_env () {
  params=$1
  # If .Tags does not exist we assume an SSM Parameters object.
  SELECTOR="Name"
  for key in $(echo $params | /usr/bin/jq -r ".[][].${SELECTOR}"); do
    value=$(echo $params | /usr/bin/jq -r ".[][] | select(.${SELECTOR}==\"$key\") | .Value")
    key=$(echo "${key##*/}" | /usr/bin/tr ':' '_' | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
    export $key="$value"
    echo "$key=$value"
  done
}
# Get TAGS
if [ -z "$PARAMETER_PATH" ]
then
echo "Please provide a parameter store path. -p option"
exit 1
fi
TAGS=$(get_parameter_store_tags ${PARAMETER_PATH} ${REGION})
echo "Tags fetched via ssm from ${PARAMETER_PATH} ${REGION}"
echo "Adding new variables..."
params_to_env "$TAGS"
Notes -
What I think I know but am unsure of
the user-data script is only run when the instance is first created, not when I stop and then start it, as mentioned here (although it also says [I think outdated] that the output is logged to /var/log/cloud-init-output.log)
I may not be starting the instance correctly
I don't know where to store the bash script so that it can be executed
What I have verified
the user-data text is on the instance by ssh-ing in and curl http://169.254.169.254/latest/user-data shows the current text (#!/bin/bash …)
What I've tried
editing rc.local directly to export AWS_ACCESS_KEY_ID='JEFEJEFEJEFEJEFE' … and the like
putting them in the AWS Parameter Store (and can see them via the correct call, I just can't trace getting them into the EC2 instance without logs or confirming if the user-data is getting run)
putting ENV variables in Tags and importing them as mentioned here:
tried outputting the logs to other files as suggested here (Not seeing any log files in the ssh instance or on the system log)
viewing the System Log on the aws webpage to see any errors/logs via selecting the instance -> 'Actions' -> 'Instance Settings' -> 'Get System Log' (not seeing any commands run or log statements [only 1 unrelated word of user])
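For what it's worth, a quick way to confirm whether the user-data script ran at all is to check the cloud-init logs over SSH (a hedged sketch; these are the default log locations on Amazon Linux, plus the metadata URL already used above):
sudo cat /var/log/cloud-init-output.log                    # stdout/stderr of the user-data script, if it ran
sudo grep -i user-data /var/log/cloud-init.log             # cloud-init's own record of handling user-data
curl -s http://169.254.169.254/latest/user-data | head     # what user-data the instance actually received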

How to cp file only if it does not exist, throw error otherwise?

aws s3 cp "dist/myfile" "s3://my-bucket/production/myfile"
It always copies myfile to S3 - I would like to copy the file ONLY if it does not exist, and throw an error otherwise. How can I do it? Or at least, how can I use the awscli to check if the file exists?
You could test for the existence of a file by listing the file, and seeing whether it returns something. For example:
aws s3 ls s3://bucket/file.txt | wc -l
This would return a zero (no lines) if the file does not exist.
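For example, that check can be wrapped directly in a test (a small sketch using the same bucket/key as above):
if [ "$(aws s3 ls s3://bucket/file.txt | wc -l)" -eq 0 ]; then
  echo "file.txt does not exist in the bucket"
fi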
If you only want to copy a file if it does not exist, try the sync command, e.g.:
aws s3 sync . s3://bucket/ --exclude '*' --include 'file.txt'
This will synchronize the local file with the remote object, only copying it if it does not exist or if the local file is different to the remote object.
So, turns out that "aws s3 sync" doesn't do files, only directories. If you give it a file, you get...interesting...behavior, since it treats anything you give it like a directory and throws a slash on it. At least aws-cli/1.6.7 Python/2.7.5 Darwin/13.4.0 does.
%% date > test.txt
%% aws s3 sync test.txt s3://bucket/test.txt
warning: Skipping file /Users/draistrick/aws/test.txt/. File does not exist.
So, if you -really- only want to sync a file (only upload if exists, and if checksum matches) you can do it:
file="test.txt"
aws s3 sync --exclude '*' --include "$file" "$(dirname $file)" "s3://bucket/"
Note the exclude/include order - if you reverse that, it won't include anything. And your source and include path need to have sanity around their matching, so maybe a $(basename $file) is in order for --include if you're using full paths... aws --debug s3 sync is your friend here to see how the includes evaluate.
And don't forget the target is a directory key, not a file key.
Here's a working example:
%% file="test.txt"
%% date >> $file
%% aws s3 sync --exclude '*' --include "$file" "$(dirname $file)" "s3://bucket/"
upload: ./test.txt to s3://bucket/test.txt/test.txt
%% aws s3 sync --exclude '*' --include "$file" "$(dirname $file)" "s3://bucket/"
%% date >> $file
%% aws s3 sync --exclude '*' --include "$file" "$(dirname $file)" "s3://bucket/"
upload: ./test.txt to s3://bucket/test.txt/test.txt
(now, if only there were a way to ask aws s3 to -just- validate the checksum, since it seems to always do multipart style checksums.. oh, maybe some --dryrun and some output scraping and sync..)
You can do this by listing the object and copying only if the listing fails (i.e. the object does not exist).
aws s3 ls "s3://my-bucket/production/myfile" || aws s3 cp "dist/myfile" "s3://my-bucket/production/myfile"
Edit: replaced && with || to get the desired effect: if the list fails, do the copy
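If you instead want the behaviour the question literally asks for (fail when the object already exists), the same check can be inverted. A sketch, with the caveat that aws s3 ls matches by prefix, so "myfile" would also match "myfile2":
if aws s3 ls "s3://my-bucket/production/myfile" > /dev/null; then
  echo "myfile already exists, aborting" >&2
  exit 1
fi
aws s3 cp "dist/myfile" "s3://my-bucket/production/myfile"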
You can also check the existence of a file by aws s3api head-object subcommand. An advantage of this over aws s3 ls is that it just requires s3:GetObject permission instead of s3:ListBucket.
$ aws s3api head-object --bucket ${BUCKET} --key ${EXISTENT_KEY}
{
"AcceptRanges": "bytes",
"LastModified": "Wed, 1 Jan 2020 00:00:00 GMT",
"ContentLength": 10,
"ETag": "\"...\"",
"VersionId": "...",
"ContentType": "binary/octet-stream",
"ServerSideEncryption": "AES256",
"Metadata": {}
}
$ echo $?
0
$ aws s3api head-object --bucket ${BUCKET} --key ${NON_EXISTENT_KEY}
An error occurred (403) when calling the HeadObject operation: Forbidden
$ echo $?
255
Note that the HTTP status code for the non-existent object depends on whether you have the s3:ListBucket permission. See the API document for more details:
If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error.
If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
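The exit status of head-object can likewise be used as a guard before the copy (a minimal sketch; the bucket, key and local path are placeholders):
if aws s3api head-object --bucket "$BUCKET" --key "$KEY" > /dev/null 2>&1; then
  echo "s3://${BUCKET}/${KEY} already exists" >&2
  exit 1
fi
aws s3 cp "dist/myfile" "s3://${BUCKET}/${KEY}"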
AWS HACK
You can run the following command to raise ERROR if the file already exists
Run the aws s3 sync command to sync the file to s3; it will return the copied path if the file doesn't exist, or it will give blank output if it exists
Run wc -c command to check the character count and raise an error if the output is zero
com=$(aws s3 sync dist/ s3://my-bucket/production/ | wc -c); if [[ $com -ne 0 ]]; then exit 1; else exit 0; fi;
OR
#!/usr/bin/env bash
com=$(aws s3 sync dist s3://my-bucket/production/ | wc -c)
echo "hello $com"
if [[ $com -ne 0 ]]; then
  echo "File already exists"
  exit 1
else
  echo "success"
  exit 0
fi
I voted up aviggiano. Using his example above, I was able to get this to work in my Windows .bat file. If the S3 path exists it will throw an error and end the batch job. If the file does not exist it will continue on to perform the copy function. Hope this helps someone.
:Step1
aws s3 ls s3://00000000000-fake-bucket/my/s3/path/inbound/test.txt && ECHO Could not copy to S3 bucket because S3 Object already exists, ending script. && GOTO :Failure
ECHO No file found in bucket, begin upload.
aws s3 cp Z:\MY\LOCAL\PATH\test.txt s3://00000000000-fake-bucket/my/s3/path/inbound/test.txt --exclude "*" --include "*.txt"
:Step2
ECHO YOU MADE IT, LET'S CELEBRATE
IF %ERRORLEVEL% == 0 GOTO :Success
GOTO :Failure
:Success
echo Job Ended success
GOTO :ExitScript
:Failure
echo BC_Script_Execution_Complete Failure
GOTO :ExitScript
:ExitScript
I am running AWS on Windows, and this is my simple script.
rem clean work files:
if exist SomeFileGroup_remote.txt del /q SomeFileGroup_remote.txt
if exist SomeFileGroup_remote-fileOnly.txt del /q SomeFileGroup_remote-fileOnly.txt
if exist SomeFileGroup_Local-fileOnly.txt del /q SomeFileGroup_Local-fileOnly.txt
if exist SomeFileGroup_remote-Download-fileOnly.txt del /q SomeFileGroup_remote-Download-fileOnly.txt
Rem prep:
call F:\Utilities\BIN\mhedate.cmd
aws s3 ls s3://awsbucket//someuser#domain.com/BulkRecDocImg/folder/folder2/ --recursive >>SomeFileGroup_remote.txt
for /F "tokens=1,2,3,4* delims= " %%i in (SomeFileGroup_remote.txt) do #echo %%~nxl >>SomeFileGroup_remote-fileOnly.txt
dir /b temp\*.* >>SomeFileGroup_Local-fileOnly.txt
findstr /v /I /l /G:"SomeFileGroup_Local-fileOnly.txt" SomeFileGroup_remote-fileOnly.txt >>SomeFileGroup_remote-Download-fileOnly.txt
Rem Download:
for /F "tokens=1* delims= " %%i in (SomeFileGroup_remote-Download-fileOnly.txt) do (aws s3 cp s3://awsbucket//someuser#domain.com/BulkRecDocImg/folder/folder2/%%~nxi "temp" >>"SomeFileGroup_Download_%DATE.YEAR%%DATE.MONTH%%DATE.DAY%.log")
I added the date to the path in order to not overwrite the file:
aws s3 cp videos/video_name.mp4 s3://BUCKET_NAME/$(date +%D-%H:%M:%S)
That way I will have a history and the existing file won't be overwritten.

Amazon RDS - Online only when needed?

I had a question about Amazon RDS. I only need the database online for about 2 hours a day but I am dealing with quite a large database at around 1gb.
I have two main questions:
Can I automate bringing my RDS database online and offline via scripts to save money?
When I take an RDS instance offline to stop the "work hours" counter running and billing me, will it still have the same content when I bring it back online (i.e. will all my data stay there, or will it have to be a blank DB)? If so, is there any way around this other than backing up to S3 and re-importing it every time?
If you wish to do this programmatically:
Snapshot the RDS instance using rds-create-db-snapshot http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-CopyDBSnapshot.html
Delete the running instance using rds-delete-db-instance http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-DeleteDBInstance.html
Restore the database from the snapshot using rds-restore-db-instance-from-db-snapshot http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-RestoreDBInstanceFromDBSnapshot.html
You may also do all of this from the AWS Web Console as well, if you wish to do this manually.
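With the current unified aws CLI the same three steps look roughly like this (a sketch; the instance and snapshot identifiers are placeholders):
# snapshot, then delete the instance (we already have the snapshot, so skip the final one)
aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-offline
aws rds delete-db-instance --db-instance-identifier mydb --skip-final-snapshot
# later, restore the instance from that snapshot
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier mydb --db-snapshot-identifier mydb-offline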
You can start EC2* instances using shell scripts, so I guess you can do the same for RDS.
(see http://docs.aws.amazon.com/AmazonRDS....html)
But unlike EC2*, you cannot "stop" an RDS instance without "destroying" it. You need to create a DB snapshot when terminating your database. You will use this DB snapshot when re-starting the database.
*EC2: Elastic Compute Cloud, i.e. renting a virtual server.
Here's a script that will stop/start/reboot an RDS instance
#!/bin/bash
# usage ./startStop.sh lhdevices start
INSTANCE="$1"
ACTION="$2"
# export vars to run RDS CLI
export JAVA_HOME=/usr;
export AWS_RDS_HOME=/home/mysql/RDSCli-1.15.001;
export PATH=$PATH:/home/mysql/RDSCli-1.15.001/bin;
export EC2_REGION=us-east-1;
export AWS_CREDENTIAL_FILE=/home/mysql/RDSCli-1.15.001/keysLightaria.txt;
if [ $# -ne 2 ]
then
  echo "Usage: $0 {MySQL-Instance Name} {Action either start, stop or reboot}"
  echo ""
  exit 1
fi
shopt -s nocasematch
if [[ $ACTION == 'start' ]]
then
  echo "This will $ACTION a MySQL Instance"
  rds-restore-db-instance-from-db-snapshot lhdevices \
    --db-snapshot-identifier dbStart --availability-zone us-east-1a \
    --db-instance-class db.m1.small
  echo "Sleeping while instance is created"
  sleep 10m
  echo "waking..."
  rds-modify-db-instance lhdevices --db-security-groups kfarrell
  echo "Sleeping while instance is modified for security group name"
  sleep 5m
  echo "waking..."
elif [[ $ACTION == 'stop' ]]
then
  echo "This will $ACTION a MySQL Instance"
  yes | rds-delete-db-snapshot dbStart
  echo "Sleeping while deleting old snapshot "
  sleep 10m
  #rds-create-db-snapshot lhdevices --db-snapshot-identifier dbStart
  # echo "Sleeping while creating new snapshot "
  # sleep 10m
  # echo "waking...."
  #rds-delete-db-instance lhdevices --force --skip-final-snapshot
  rds-delete-db-instance lhdevices --force --final-db-snapshot-identifier dbStart
  echo "Sleeping while instance is deleted"
  sleep 10m
  echo "waking...."
elif [[ $ACTION == 'reboot' ]]
then
  echo "This will $ACTION a MySQL Instance"
  rds-reboot-db-instance lhdevices
  echo "Sleeping while Instance is rebooted"
  sleep 5m
  echo "waking...."
else
  echo "Did not recognize command: $ACTION"
  echo "Usage: $0 {MySQL-Instance Name} {Action either start, stop or reboot}"
fi
shopt -u nocasematch
Amazon recently updated their CLI to include a way to start and stop RDS instances: the stop-db-instance and start-db-instance commands perform these operations.
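For example (the instance identifier is a placeholder):
aws rds stop-db-instance --db-instance-identifier mydb
aws rds start-db-instance --db-instance-identifier mydb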