How to check if an AWS CLI named profile exists - amazon-web-services

How do I check if a named profile exists before I attempt to use it?
The AWS CLI will throw an ugly error if I attempt to use a non-existent profile, so I'd like to do something like this:
$(awsConfigurationExists "${profile_name}") && aws iam list-users --profile "${profile_name}" || echo "can't do it!"

Method 1 - Check entries in the .aws/config file
function awsConfigurationExists() {
  local profile_name="${1}"
  local profile_name_check=$(grep "\[profile ${profile_name}]" "$HOME/.aws/config")
  if [ -z "${profile_name_check}" ]; then
    return 1
  else
    return 0
  fi
}
Method 2 - Check the result of aws configure list (see aws-cli issue #819)
function awsConfigurationExists() {
  local profile_name="${1}"
  local profile_status=$( (aws configure --profile "${profile_name}" list) 2>&1)
  if [[ $profile_status = *'could not be found'* ]]; then
    return 1
  else
    return 0
  fi
}
Usage
$(awsConfigurationExists "my-aws-profile") && echo "does exist" || echo "does not exist"
or
if awsConfigurationExists "my-aws-profile"; then
  echo "does exist"
else
  echo "does not exist"
fi

I was stuck with the same problem and the proposed answer did not work for me.
Here is my solution with aws-cli/2.8.5 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off:
export AWS_PROFILE=localstack
aws configure list-profiles | grep -q "${AWS_PROFILE}"
if [ $? -eq 0 ]; then
  echo "AWS Profile [$AWS_PROFILE] already exists"
else
  echo "Creating AWS Profile [$AWS_PROFILE]"
  aws configure --profile "$AWS_PROFILE" set aws_access_key_id test
  aws configure --profile "$AWS_PROFILE" set aws_secret_access_key test
fi
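One caveat with the plain grep: it is a substring match, so a profile named localstack would appear to exist if only localstack-dev were configured. A small refinement (my addition, not in the original answer) is to require an exact whole-line match:
aws configure list-profiles | grep -qx "${AWS_PROFILE}"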

Related

Best method to periodically renew your AWS access keys

I realized I had never renewed my AWS access keys, and they are credentials that should be rotated periodically to limit the damage if they ever leak.
So... what is the best way to renew them automatically, without any impact, if they are only used from my laptop?
In the end I created this bash script:
#!/bin/bash
set -e # exit on non-zero command
set -u # force vars to be declared
set -o pipefail # avoids errors in pipelines to be masked
echo "retrieving current account id..."
current_access_key_list=$(aws iam list-access-keys | jq -r '.AccessKeyMetadata')
number_of_current_access_keys=$(echo $current_access_key_list| jq length)
current_access_key=$(echo $current_access_key_list | jq -r '.[]|.AccessKeyId')
if [[ ! "$number_of_current_access_keys" == "1" ]]; then
echo "ERROR: There already are more than 1 access key"
exit 1
fi
echo "Current access key is ${current_access_key}"
echo "creating a new access key..."
new_access_key=$(aws iam create-access-key)
access_key=$(echo "$new_access_key" | jq -r '.AccessKey.AccessKeyId')
access_key_secret=$(echo "$new_access_key" | jq -r '.AccessKey.SecretAccessKey')
echo "New access key is: ${access_key}"
echo "performing credentials backup..."
cp ~/.aws/credentials ~/.aws/credentials.bak
echo "changing local credentials..."
aws configure set aws_access_key_id "${access_key}"
aws configure set aws_secret_access_key "${access_key_secret}"
echo "wait 10 seconds to ensure new access_key is set..."
sleep 10
echo "check new credentials work fine"
aws iam get-user | jq -r '.User'
echo "removing old access key $current_access_key"
aws iam delete-access-key --access-key-id "$current_access_key"
echo "Congrats. You are using the new credentials."
echo "Feel free to remove the backup file:"
echo " rm ~/.aws/credentials.bak"
I placed that script in ~/.local/bin so it is on my PATH, and then I added these lines at the end of my .bashrc and/or .zshrc files:
# rotate AWS keys if they are too old
if [[ -n "$(find ~/.aws -mtime +30 -name credentials)" ]]; then
  AWS_PROFILE=profile-1 rotate_aws_access_key
  AWS_PROFILE=profile-2 rotate_aws_access_key
fi
So every time I open a terminal (which is quite often), it checks whether the credentials file has gone unmodified for more than a month and, if so, tries to renew my credentials automatically.
The worst thing that could happen is that the script creates the new access key but fails to update my local credentials, which would force me to remove the new key by hand.
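If that ever happens, a quick manual check (my addition, not part of the script) is to list the keys with their creation dates and then delete the stray one, just as the script does:
# Show every access key on the account together with its creation date.
aws iam list-access-keys --query 'AccessKeyMetadata[].[AccessKeyId,CreateDate]' --output table
# Then remove the unwanted key by id.
aws iam delete-access-key --access-key-id <stray-access-key-id>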

aws s3 ls exact match

I am trying to verify that the file AdHocQuery.js exists at an S3 URL using the following command.
$ aws s3 ls s3://web-content-test/application/AdHocQuery.js --recursive --summarize
2013-06-11 20:25:12 1136257 AdHocQuery.js
2013-06-11 20:25:13 7524785 AdHocQuery.js.remove_oldvalue
but this also returns AdHocQuery.js.remove_oldvalue, which is a false positive. I am looking for a way to check that a specific key exists without also matching keys that merely share its prefix.
This solution worked! Thank you for suggesting it in the comments!
object_exists=$(aws s3api head-object --bucket "$bucket" --key "$key" || true)
if [ -z "$object_exists" ]; then
  echo "it does not exist"
else
  echo "it exists"
fi
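If you would rather stay with aws s3 ls, a hedged alternative is to anchor a grep on the exact key name; this assumes the listing format shown above, where the key is the last field on the line:
# Anchoring on the exact name keeps AdHocQuery.js.remove_oldvalue from matching.
aws s3 ls s3://web-content-test/application/AdHocQuery.js | grep -q ' AdHocQuery.js$' \
  && echo "it exists" || echo "it does not exist"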

AWS CLI check if a lambda function exists

How do I do a one-time check if a lambda function exists via the CLI? I saw this function-exists option - https://docs.aws.amazon.com/cli/latest/reference/lambda/wait/function-exists.html
But it polls every second and returns a failure after 20 failed checks. I only want to check once and fail if it isn't found. Is there a way to do that?
You can check the exit code of get-function in bash. If the function does not exist, it returns exit code 255; on success it returns 0.
e.g.
aws lambda get-function --function-name my_lambda
echo $?
And you can use it like below:
(paste this in your terminal)
function does_lambda_exist() {
  aws lambda get-function --function-name "$1" > /dev/null 2>&1
  if [ 0 -eq $? ]; then
    echo "Lambda '$1' exists"
  else
    echo "Lambda '$1' does not exist"
  fi
}
does_lambda_exist my_lambda_fn_name
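One caveat: get-function also exits non-zero on auth or network problems, not only when the function is missing. A hedged refinement (my addition; it assumes the CLI error text contains ResourceNotFoundException when the function does not exist) inspects the error output:
function does_lambda_exist_strict() {
  local err
  # Capture only stderr; stdout (the function metadata) is discarded.
  err=$(aws lambda get-function --function-name "$1" 2>&1 >/dev/null)
  if [ $? -eq 0 ]; then
    echo "Lambda '$1' exists"
  elif echo "$err" | grep -q ResourceNotFoundException; then
    echo "Lambda '$1' does not exist"
  else
    echo "Could not check Lambda '$1': $err"
  fi
}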

How do I list all AWS S3 objects that are public?

I want to list all the objects in my S3 buckets that are public. Using get-object-acl lists the grantees for a specific object, so I was wondering whether there are better options.
Relying on get-object-acl is probably not what you want to do, because objects can be made public by means other than their ACL. At the very least, this is possible through both the object's ACL and the bucket's policy (see e.g. https://havecamerawilltravel.com/photographer/how-allow-public-access-amazon-bucket/), and perhaps there are other means I don't know about.
A smarter test is to make a HEAD request to each object with no credentials. If you get a 200, it's public. If you get a 403, it's not.
The steps, then, are:
Get a list of buckets with the ListBuckets endpoint. From the CLI, this is:
aws2 s3api list-buckets
For each bucket, get its region and list its objects. From the CLI (assuming you've got credentials configured to use it), you can do these two things with these two commands, respectively:
aws2 s3api get-bucket-location --bucket bucketnamehere
aws2 s3api list-objects --bucket bucketnamehere
For each object, make a HEAD request to a URL like
https://bucketname.s3.us-east-1.amazonaws.com/objectname
with bucketname, us-east-1, and objectname respectively replaced with your bucket name, the actual name of the bucket's region, and your object name.
To do this from the Unix command line with Curl, do
curl -I https://bucketname.s3.us-east-1.amazonaws.com/objectname
An example implementation of the logic above in Python using Boto 3 and Requests:
from typing import Iterator

import boto3
import requests

s3 = boto3.client('s3')

all_buckets = [
    bucket_dict['Name'] for bucket_dict in
    s3.list_buckets()['Buckets']
]


def list_objs(bucket: str) -> Iterator[str]:
    """
    Generator yielding all object names in the bucket. Potentially requires
    multiple requests for large buckets since list_objects is capped at 1000
    objects returned per call.
    """
    response = s3.list_objects_v2(Bucket=bucket)
    while True:
        if 'Contents' not in response:
            # Happens if bucket is empty
            return
        for obj_dict in response['Contents']:
            yield obj_dict['Key']
        last_key = obj_dict['Key']
        if response['IsTruncated']:
            response = s3.list_objects_v2(Bucket=bucket, StartAfter=last_key)
        else:
            return


def is_public(bucket: str, region: str, obj: str) -> bool:
    url = f'https://{bucket}.s3.{region}.amazonaws.com/{obj}'
    resp = requests.head(url)
    if resp.status_code == 200:
        return True
    elif resp.status_code == 403:
        return False
    else:
        raise Exception(f'Unexpected HTTP code {resp.status_code} from {url}')


for bucket in all_buckets:
    region = s3.get_bucket_location(Bucket=bucket)['LocationConstraint']
    for obj in list_objs(bucket):
        if is_public(bucket, region, obj):
            print(f'{bucket}/{obj} is public')
Be aware that this takes about a second per object, which is... not ideal, if you have a lot of stuff in S3. I don't know of a faster alternative, though.
After spending some time with the AWS CLI, I can tell you that the best approach is to sync, mv, or cp the files with explicit permissions under structured prefixes.
Permission – Specifies the granted permissions, and can be set to read, readacl, writeacl, or full.
For example: aws s3 sync . s3://my-bucket/path --acl public-read
Then list all the objects under that prefix, for example:
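A minimal sketch of that listing step, reusing the my-bucket/path placeholders from the sync command above:
# Everything synced with --acl public-read landed under this prefix,
# so listing it enumerates the public objects.
aws s3 ls s3://my-bucket/path --recursive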
Put the bucket name, or a list of bucket names, into a "buckets.list" file and run the bash script below.
The script supports an unlimited(!) number of objects because it uses pagination.
#!/bin/bash
MAX_ITEMS=100
PAGE_SIZE=100
for BUCKET in $(cat buckets.list); do
  NEXT_TOKEN=""
  while :; do
    # Fetch one page of results; the first call has no --starting-token.
    if [[ -z "$NEXT_TOKEN" ]]; then
      PAGE=$(aws s3api list-objects-v2 --bucket "$BUCKET" --max-items=$MAX_ITEMS --page-size=$PAGE_SIZE 2>&1)
    else
      PAGE=$(aws s3api list-objects-v2 --bucket "$BUCKET" --max-items=$MAX_ITEMS --page-size=$PAGE_SIZE --starting-token "$NEXT_TOKEN" 2>&1)
    fi
    e1=$?
    if [[ "$PAGE" =~ "Could not connect to the endpoint URL" ]]; then
      echo "Could not connect to the endpoint URL!"
      echo -e "$BUCKET" "Could not connect to the endpoint URL" >> errors.log
      break
    fi
    # Note: this word-splits on whitespace, so keys containing spaces are not handled.
    OBJECTS=$(echo "$PAGE" | jq -r '.Contents[]?.Key')
    for OBJECT in $OBJECTS; do
      ACL=$(aws s3api get-object-acl --bucket "$BUCKET" --key "$OBJECT" --query "Grants[?Grantee.URI=='http://acs.amazonaws.com/groups/global/AllUsers']" --output=text 2>&1)
      e2=$?
      if [[ "$ACL" =~ "Could not connect to the endpoint URL" ]]; then
        echo "Could not connect to the endpoint URL!"
        echo -e "$BUCKET" "$OBJECT" "Could not connect to the endpoint URL" >> errors.log
      fi
      if [[ -n "$ACL" ]] && [[ $e1 == 0 ]] && [[ $e2 == 0 ]]; then
        echo -e "$BUCKET" "$OBJECT" "Public object!!!" "$ACL"
        echo -e "$BUCKET" "$OBJECT" "$ACL" >> public-objects.log
      else
        echo -e "$BUCKET" "$OBJECT" "not public"
      fi
    done
    # Stop when there is no further page.
    NEXT_TOKEN=$(echo "$PAGE" | jq -r '.NextToken')
    if [[ -z "$NEXT_TOKEN" || "$NEXT_TOKEN" == "null" ]]; then
      break
    fi
  done
done
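For example, to run it (the filename check-public-objects.sh is just a placeholder for wherever you saved the script):
# One bucket name per line.
printf '%s\n' my-first-bucket my-second-bucket > buckets.list
bash check-public-objects.sh
Public objects are appended to public-objects.log and endpoint errors to errors.log.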

Exporting DNS zonefile from Amazon Route 53

I would like to export a DNS zonefile from my Amazon Route 53 setup. Is this possible, or can zonefiles only be created manually? (e.g. through http://www.zonefile.org/?lang=en)
The following script exports zone details in BIND format from Route 53. Pass the domain name as a parameter to the script. (This requires awscli and jq to be installed and configured.)
#!/bin/bash
zonename=$1
hostedzoneid=$(aws route53 list-hosted-zones --output json | jq -r ".HostedZones[] | select(.Name == \"$zonename.\") | .Id" | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id $hostedzoneid --output json | jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'
It's not possible yet. You'll have to use the API's ListResourceRecordSets and build the zonefile yourself.
As stated in the comment, the cli53 is a great tool to interact with Route 53 using the command line interface.
First, configure your account keys in ~/.aws/config file:
[default]
aws_access_key_id = AK.....ZP
aws_secret_access_key = 8j.....M0
Then, use the export command:
$ cli53 export --full --debug example.com > example.com.zone 2> example.com.zone.log
Verify the example.com.zone file after export to make sure that everything is exported correctly.
You can import the zone later:
$ cli53 import --file ./example.com.zone example.com
And if you want to transfer the Route53 zone from one AWS account to another, you can use the profile option. Just add two named accounts to the ~/.aws/config file and reference them with the profile property during export and import. You can even pipe these two commands.
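A sketch of that transfer, assuming cli53 accepts a --profile option for both commands (the profile names source-account and dest-account are placeholders, and the exact flag placement may differ between cli53 versions; check cli53 --help):
cli53 export --full --profile source-account example.com > example.com.zone
cli53 import --file ./example.com.zone --profile dest-account example.com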
You can export a JSON file:
aws route53 list-resource-record-sets --hosted-zone-id <zone-id-here> --output json > route53-records.json
You can export with aws api
aws route53 list-resource-record-sets --hosted-zone-id YOUR_ZONE_ID
Exporting and importing is possible with https://github.com/RisingOak/route53-transfer
Based on #szentmarjay's answer above, except that it shows usage and supports either zone_id or zone_name. This is my favorite because the output is standard old-school BIND format, so other tools can work with it.
#!/bin/bash
# r53_export
usage() {
  local cmd=$(basename "$0")
  echo -e >&2 "\nUsage: $cmd {--id ZONE_ID|--domain ZONE_NAME}\n"
  exit 1
}
while [[ $1 ]]; do
  if [[ $1 == --id ]]; then shift; zone_id="$1"
  elif [[ $1 == --domain ]]; then shift; zone_name="$1"
  else usage
  fi
  shift
done
if [[ $zone_name ]]; then
  zone_id=$(
    aws route53 list-hosted-zones --output json \
      | jq -r ".HostedZones[] | select(.Name == \"$zone_name.\") | .Id" \
      | head -n1 \
      | cut -d/ -f3
  )
  echo >&2 "+ Found zone id: '$zone_id'"
fi
[[ $zone_id ]] || usage
aws route53 list-resource-record-sets --hosted-zone-id $zone_id --output json \
| jq -jr '.ResourceRecordSets[] | "\(.Name) \t\(.TTL) \t\(.Type) \t\(.ResourceRecords[]?.Value)\n"'