Invoking AWS Lambda via Unsigned POST to REST API

I want to have a form on my Jekyll website that visitors can fill out, and the form action should POST to an AWS Lambda function. No JavaScript is allowed on the website, so the POST must not require signing.
I want the simplest possible setup, and do not need high security. If there is a way to avoid using AWS API Gateway to create an HTTP API, and somehow have the Lambda function directly receive the POST from the user's web browser, that would be perfect. If API Gateway is required, then the simplest solution would be best.
I want to use command line commands exclusively (not a web browser) to work with the AWS API. This allows for a scripted solution.
I've spent some time on the problem, and here is what I've got. I've marked questions in the deploy script with TODO. There is some extra code in that script which might not be needed; the problem is that I'm unsure what to delete, because I can't figure out how to deliver the POST to the Lambda function.
The bash scripts use jq and yq to parse JSON and YAML, respectively.
_config.yml
aws:
  cloudfront:
    distributionId: "" # Provide value if CloudFront is used on this site
  lambda:
    addSubscriber:
      custom: # TODO change these values to suit your website
        iamRoleName: lambda-ex
        name: addSubscriberAwsLambdaSample
        handler: addSubscriberAwsLambda.lambda_handler
        runtime: python3.8
      computed: # These values are computed by the _bin/awsLambda setup and deploy scripts
        arn: arn:aws:lambda:us-east-1:031372724784:function:addSubscriberAwsLambdaSample:3
        iamRoleArn: arn:aws:iam::031372724784:role/lambda-ex
utils source bash script
#!/bin/bash
function readYaml {
  # $1 - path
  yq r _config.yml "$1"
}

function writeYaml {
  # $1 - path
  # $2 - value
  yq w -i _config.yml "$1" "$2"
}
# AWS Lambda values
export LAMBDA_IAM_ROLE_ARN="$( readYaml aws.lambda.addSubscriber.computed.iamRoleArn )"
export LAMBDA_NAME="$( readYaml aws.lambda.addSubscriber.custom.name )"
export LAMBDA_RUNTIME="$( readYaml aws.lambda.addSubscriber.custom.runtime )"
export LAMBDA_HANDLER="$( readYaml aws.lambda.addSubscriber.custom.handler )"
export LAMBDA_IAM_ROLE_NAME="$( readYaml aws.lambda.addSubscriber.custom.iamRoleName )"
export PACKAGE_DIR="${GIT_ROOT}/_package"
export LAMBDA_ZIP="${PACKAGE_DIR}/function.zip"
# Misc values
export TITLE="$( readYaml title )"
export URL="$( readYaml url )"
export DOMAIN="$( echo "$URL" | sed -n -e 's,^https\?://,,p' )"
setup bash script
#!/bin/bash
# Inspired by https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html
SOURCE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
GIT_ROOT="$( git rev-parse --show-toplevel )"
cd "${GIT_ROOT}"
source _bin/utils
# Define the execution role that gives an AWS Lambda function permission to access AWS resources.
read -r -d '' ROLE_POLICY_JSON <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# If a role named $LAMBDA_IAM_ROLE_NAME is already defined then use it
ROLE_RESULT="$( aws iam get-role --role-name "$LAMBDA_IAM_ROLE_NAME" 2> /dev/null )"
if [ $? -ne 0 ]; then
ROLE_RESULT="$( aws iam create-role \
--role-name "$LAMBDA_IAM_ROLE_NAME" \
--assume-role-policy-document "$ROLE_POLICY_JSON"
)"
fi
LAMBDA_IAM_ROLE_ARN="$( jq -r .Role.Arn <<< "$ROLE_RESULT" )"
writeYaml aws.lambda.addSubscriber.computed.iamRoleArn "$LAMBDA_IAM_ROLE_ARN"
deploy bash script
#!/bin/bash
# Call this script after the setup script has created the IAM role
# that gives the addSubscriber AWS Lambda function permission to access AWS resources
#
# 1) This script builds the AWS Lambda package and deploys it, with permissions.
# Any previous version of the AWS Lambda is deleted.
#
# 2) The newly (re)created AWS Lambda ARN is stored in _config.yml
#
# 3) An AWS Gateway HTTP API is created so static web pages can POST subscriber information to the AWS Lambda function.
# Because the web page is not allowed to have JavaScript, the POST is unsigned.
# *** The API must allow for an unsigned POST!!! ***
# Set cwd to the git project root
GIT_ROOT="$( git rev-parse --show-toplevel )"
cd "${GIT_ROOT}"
# Load configuration environment variables from _bin/utils:
# DOMAIN, LAMBDA_IAM_ROLE_ARN, LAMBDA_IAM_ROLE_NAME, LAMBDA_HANDLER, LAMBDA_NAME, LAMBDA_RUNTIME, LAMBDA_ZIP, PACKAGE_DIR, and URL
source _bin/utils
# Directory that this script resides in
SOURCE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "Building the AWS Lambda and packaging it into a zip file"
"$SOURCE_DIR/package" "$PACKAGE_DIR" > /dev/null
# Check to see if the Lambda function already exists.
LAMBDA="$( aws lambda list-functions | jq ".Functions[] | select(.FunctionName | contains(\"$LAMBDA_NAME\"))" )"
if [ -z "$LAMBDA" ]; then
echo "The AWS Lambda function '$LAMBDA_NAME' does not exist yet, so create it"
LAMBDA_METADATA="$( aws lambda create-function \
--description "Add subscriber to the MailChimp list with ID '$MC_LIST_ID_MSLINN' for the '$DOMAIN' website" \
--environment "{
\"Variables\": {
\"MC_API_KEY_MSLINN\": \"$MC_API_KEY_MSLINN\",
\"MC_LIST_ID_MSLINN\": \"$MC_LIST_ID_MSLINN\",
\"MC_USER_NAME_MSLINN\": \"$MC_USER_NAME_MSLINN\"
}
}" \
--function-name "$LAMBDA_NAME" \
--handler "$LAMBDA_HANDLER" \
--role "arn:aws:iam::${AWS_ACCOUNT_ID}:role/$LAMBDA_IAM_ROLE_NAME" \
--runtime "$LAMBDA_RUNTIME" \
--zip-file "fileb://$LAMBDA_ZIP" \
| jq -S .
)"
LAMBDA_ARN="$( jq -r .Configuration.FunctionArn <<< "$LAMBDA_METADATA" )"
else
echo "The AWS Lambda function '$LAMBDA_NAME' already exists, so update it"
LAMBDA_METADATA="$( aws lambda update-function-code \
--function-name "$LAMBDA_NAME" \
--publish \
--zip-file "fileb://$LAMBDA_ZIP" \
| jq -S .
)"
LAMBDA_ARN="$( jq -r .FunctionArn <<< "$LAMBDA_METADATA" )"
fi
echo "AWS Lambda ARN is $LAMBDA_ARN"
writeYaml aws.lambda.addSubscriber.computed.arn "$LAMBDA_ARN"
echo "Attach the AWSLambdaBasicExecutionRole managed policy to $LAMBDA_IAM_ROLE_NAME."
aws iam attach-role-policy \
--role-name $LAMBDA_IAM_ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
#### Integrate with API Gateway for REST
#### Some or all of the following code is probably not required
GATEWAY_NAME="addSubscriberTo_$MC_LIST_ID_MSLINN"
API_GATEWAYS="$( aws apigateway get-rest-apis )"
if [ "$( jq ".items[] | select(.name | contains(\"$GATEWAY_NAME\"))" <<< "$API_GATEWAYS" )" ]; then
echo "API gateway '$GATEWAY_NAME' already exists."
else
echo "Creating API gateway '$GATEWAY_NAME'."
API_JSON="$( aws apigateway create-rest-api \
--name "$GATEWAY_NAME" \
--description "API for adding a subscriber to the Mailchimp list with ID '$MC_LIST_ID_MSLINN' for the '$DOMAIN' website"
)"
REST_API_ID="$( jq -r .id <<< "$API_JSON" )"
API_RESOURCES="$( aws apigateway get-resources --rest-api-id $REST_API_ID )"
ROOT_RESOURCE_ID="$( jq -r .items[0].id <<< "$API_RESOURCES" )"
NEW_RESOURCE="$( aws apigateway create-resource \
--rest-api-id "$REST_API_ID" \
--parent-id "$ROOT_RESOURCE_ID" \
--path-part "{proxy+}"
)"
NEW_RESOURCE_ID=$( jq -r .id <<< $NEW_RESOURCE )
if false; then
# Is this step useful for any reason?
aws apigateway put-method \
--authorization-type "NONE" \
--http-method ANY \
--resource-id "$NEW_RESOURCE_ID" \
--rest-api-id "$REST_API_ID"
fi
# The following came from https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#set-up-lambda-proxy-integration-using-cli
# Instead of supplying an IAM role for --credentials, call the add-permission command to add resource-based permissions.
# I need an example of this.
# Alternatively, how to obtain IAM_ROLE_ID? Again, I need an example.
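# A minimal, untested sketch of the resource-based-permission alternative: grant API Gateway
# permission to invoke the function, after which the --credentials flag below can be dropped.
# The statement-id is an arbitrary label; AWS_ACCOUNT_ID and REST_API_ID are already set above.
# aws lambda add-permission \
#   --function-name "$LAMBDA_NAME" \
#   --statement-id apigateway-invoke \
#   --action lambda:InvokeFunction \
#   --principal apigateway.amazonaws.com \
#   --source-arn "arn:aws:execute-api:$(aws configure get region):${AWS_ACCOUNT_ID}:${REST_API_ID}/*/*/*"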
aws apigateway put-integration \
--credentials "arn:aws:iam::${IAM_ROLE_ID}:role/apigAwsProxyRole" \
--http-method ANY \
--integration-http-method POST \
--rest-api-id "$REST_API_ID" \
--resource-id "$NEW_RESOURCE_ID" \
--type AWS_PROXY \
--uri "arn:aws:apigateway:$(aws configure get region):lambda:path/2015-03-31/functions/${LAMBDA_ARN}/invocations"
if [ "$LAMBDA_TEST"]; then
# Deploy the API to a test stage
aws apigateway create-deployment \
--rest-api-id "$REST_API_ID" \
--stage-name test
else
# Deploy the API live
aws apigateway create-deployment \
--rest-api-id "$REST_API_ID" \
--stage-name TODO_WhatNameGoesHere
fi
fi
echo "Check out the defined lambdas at https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions"

Scripting infrastructure in bash is painful. You might get it right eventually, but there are tools that make the process far easier.
I prefer Terraform; here's how an API Gateway + Lambda setup looks:
provider "aws" {
}
# lambda
resource "random_id" "id" {
byte_length = 8
}
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "/tmp/lambda.zip"
source {
content = <<EOF
module.exports.handler = async (event, context) => {
// write the lambda code here
};
EOF
filename = "main.js"
}
}
resource "aws_lambda_function" "lambda" {
function_name = "${random_id.id.hex}-function"
filename = data.archive_file.lambda_zip.output_path
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
handler = "main.handler"
runtime = "nodejs12.x"
role = aws_iam_role.lambda_exec.arn
}
data "aws_iam_policy_document" "lambda_exec_role_policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents"
]
resources = [
"arn:aws:logs:*:*:*"
]
}
}
resource "aws_cloudwatch_log_group" "loggroup" {
name = "/aws/lambda/${aws_lambda_function.lambda.function_name}"
retention_in_days = 14
}
resource "aws_iam_role_policy" "lambda_exec_role" {
role = aws_iam_role.lambda_exec.id
policy = data.aws_iam_policy_document.lambda_exec_role_policy.json
}
resource "aws_iam_role" "lambda_exec" {
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
# api gw
resource "aws_apigatewayv2_api" "api" {
name = "api-${random_id.id.hex}"
protocol_type = "HTTP"
target = aws_lambda_function.lambda.arn
}
resource "aws_lambda_permission" "apigw" {
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.lambda.arn
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.api.execution_arn}/*/*"
}
output "domain" {
value = aws_apigatewayv2_api.api.api_endpoint
}
Note that the last two resources are the API Gateway side; everything before them is for the Lambda function.
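As a rough usage sketch (assuming a recent Terraform that supports output -raw; the form field name is just a placeholder), you can apply this and then POST to the endpoint without any signing:
terraform init && terraform apply -auto-approve

# The HTTP API has no authorizer, so a plain unsigned POST works,
# e.g. from a static HTML form action or from curl:
curl -X POST "$(terraform output -raw domain)" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data 'email=someone@example.com'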

Related

Cloning an AWS apigateway API from existing apigateway from CLI

I have to create many AWS API Gateway APIs. All of them will invoke a Lambda function, and each new API also needs the common steps below:
API type as Regional and a REST API
Add a POST Method
Method Execution Settings
Invocation Type=Lambda Function and also choose respective Lambda
Function.
CORS Settings
Lambda Permissions
Integration Response Settings
Deploy API
Include stage in API Usageplan
Redeploy API
Here is the apigateway create-rest-api synopsis (note the --clone-from option):
SYNOPSIS
create-rest-api
--name <value>
[--description <value>]
[--clone-from <value>]
[--binary-media-types <value>]
[--minimum-compression-size <value>]
[--api-key-source <value>]
[--endpoint-configuration <value>]
[--policy <value>]
[--api-version <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
How can I clone an API Gateway API from an existing one using the CLI and avoid repeating all of the steps above?
You can also clone from an existing API when creating a new API in the console.
Use the commands below as a shell script and execute it with the parameters described at the end; the parameter names are self-explanatory.
Here is the full script, with every part explained.
#!/bin/bash
APINAME=${1}
STAGENAME=${2}
LAMBDANAME=${3}
CLONEAPIID=${4}
USAGEPLANID=${5}
AWS_PROFILE=[PROFILENAME]
AWS_REGION=[AWSREGION]
AWS_ACCOUNT=[AWSACCOUNT]
METHOD=POST
Clone API from existing API
echo "Closing API ${APINAME} from API ${CLONEAPIID}"
RESTAPIID=`aws apigateway create-rest-api --name "${APINAME}" --description "${APINAME}" --clone-from ${CLONEAPIID} --endpoint-configuration '{"types":["REGIONAL"]}' --profile ${AWS_PROFILE} | grep '"id"' | sed 's/,//g;s/ //g;s/"//g;' | awk -F: '{ print $2 }'`
Display New Rest API ID
echo RESTAPIID: ${RESTAPIID}
Getting Resource
echo "Getting Resource"
RESOURCEID=`aws apigateway get-resources --rest-api-id ${RESTAPIID} --profile ${AWS_PROFILE} | grep '"id"' | sed 's/,//g;s/ //g;s/"//g;' | awk -F: '{ print $2 }'`
echo RESOURCEID: ${RESOURCEID}
Setting URI and Lambda as Invocation
echo "Setting Lambda ${LAMBDANAME}"
LAMBDA_URL="arn:aws:apigateway:${AWS_REGION}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS_REGION}:${AWS_ACCOUNT}:function:${LAMBDANAME}/invocations"
aws apigateway put-integration --rest-api-id ${RESTAPIID} --resource-id ${RESOURCEID} --http-method ${METHOD} --type AWS --integration-http-method ${METHOD} --uri "${LAMBDA_URL}" --profile ${AWS_PROFILE} | grep uri
Generating UUID as statement-id
SID=`uuidgen`
Adding permissions for API gateway to execute Lambda Function
aws lambda add-permission --function-name arn:aws:lambda:${AWS_REGION}:${AWS_ACCOUNT}:function:findPostcode --source-arn arn:aws:execute-api:${AWS_REGION}:${AWS_ACCOUNT}:${RESTAPIID}/*/*/* --principal apigateway.amazonaws.com --action lambda:InvokeFunction --statement-id ${SID} --profile ${AWS_PROFILE}
Setting Integration Response
aws apigateway put-integration-response --rest-api-id ${RESTAPIID} --resource-id ${RESOURCEID} --http-method ${METHOD} --status-code 200 --selection-pattern 200 --response-parameters '{"method.response.header.Access-Control-Allow-Origin": "'"'"'*'"'"'"}' --selection-pattern "" --response-templates '{"application/json": ""}' --profile ${AWS_PROFILE}
Creating Initial Deployment
echo "Creating Initial Deployment for ${APINAME} API and Stage ${STAGENAME}"
DEPLOYMENTID=`aws apigateway create-deployment --rest-api-id ${RESTAPIID} --stage-name '' --profile ${AWS_PROFILE} | grep '"id"' | sed 's/,//g;s/ //g;s/"//g;' | awk -F: '{ print $2 }'`
Creating Stage
aws apigateway create-stage --rest-api-id ${RESTAPIID} --stage-name ${STAGENAME} --description ${STAGENAME} --deployment-id ${DEPLOYMENTID} --profile ${AWS_PROFILE} | grep stageName
sleep 10
Adding API stage in Usageplan
echo "Adding Stage in Usageplan"
aws apigateway update-usage-plan --usage-plan-id ${USAGEPLANID} --patch-operations op="add",path="/apiStages",value="${RESTAPIID}:${STAGENAME}" --profile ${AWS_PROFILE} | grep name
sleep 10
Redeploying Stage
echo "Redeploying Stage"
aws apigateway create-deployment --rest-api-id ${RESTAPIID} --stage-name ${STAGENAME} --description ${STAGENAME} --profile ${AWS_PROFILE} | grep description
sleep 5
echo "REST API Endpoints configured and deployed successfully.."
Note: a short delay is needed between some steps, hence the sleep commands (times in seconds).
Here is an example of executing the above shell script (assuming it is named cloneapi.sh):
./cloneapi.sh MyAPI MyAPIStage MyLambdaFunction apxxxxx upxxxx
where:
MyAPI is the new API name
MyAPIStage is the new API stage name
MyLambdaFunction is the Lambda function name for the new API
apxxxxx is the API ID to clone from
upxxxx is the usage plan ID
The above commands can be used with any AWS CLI version and on any Linux OS, but below is the CLI and OS version used.
aws --version
aws-cli/1.15.80 Python/2.7.14 Linux/4.14.94-89.73.amzn2.x86_64 botocore/1.10.79
cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Straight from the horse's mouth, a.k.a. the AWS documentation:
Export: in the API Gateway console, export the existing API as JSON or YAML.
Creative step: in the exported file, rename the API "title" and change all the URI fields to the new values.
Import: create a new API and import what you exported in the previous step.
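If you prefer to stay on the command line, a rough (untested) equivalent is to export the deployed stage as an OpenAPI definition and re-import it as a new API; abc123, prod, and export.json are placeholders:
# Export the existing API's 'prod' stage as OpenAPI 3.0, including the API Gateway integration extensions
aws apigateway get-export \
  --rest-api-id abc123 \
  --stage-name prod \
  --export-type oas30 \
  --parameters extensions='integrations' \
  export.json

# Edit the title and URIs in export.json, then import it as a brand-new API
aws apigateway import-rest-api --body fileb://export.json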

Terraform unable to assume roles with MFA enabled

I'm having a terrible time getting Terraform to assume an IAM role in another account when MFA is required. Here's my setup:
AWS Config
[default]
region = us-west-2
output = json
[profile GEHC-000]
region = us-west-2
output = json
....
[profile GEHC-056]
source_profile = GEHC-000
role_arn = arn:aws:iam::~069:role/hc/hc-master
mfa_serial = arn:aws:iam::~183:mfa/username
external_id = ~069
AWS Credentials
[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx
[GEHC-000]
aws_access_key_id = same as above
aws_secret_access_key = same as above
Policies assigned to IAM user
STS Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeRole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"arn:aws:iam::*:role/hc/hc-master"
]
}
]
}
User Policy
{
"Statement": [
{
"Action": [
"iam:*AccessKey*",
"iam:*MFA*",
"iam:*SigningCertificate*",
"iam:UpdateLoginProfile*",
"iam:RemoveUserFromGroup*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:iam::~183:mfa/${aws:username}",
"arn:aws:iam::~183:mfa/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/*${aws:username}",
"arn:aws:iam::~183:user/${aws:username}",
"arn:aws:iam::~183:user/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/*${aws:username}"
],
"Sid": "Write"
},
{
"Action": [
"iam:*Get*",
"iam:*List*"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "Read"
},
{
"Action": [
"iam:CreateUser*",
"iam:UpdateUser*",
"iam:AddUserToGroup"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "CreateUser"
}
],
"Version": "2012-10-17"
}
Force MFA Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BlockAnyAccessOtherThanAboveUnlessSignedInWithMFA",
"Effect": "Deny",
"NotAction": "iam:*",
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
]
}
main.tf
provider "aws" {
profile = "GEHC-056"
shared_credentials_file = "${pathexpand("~/.aws/config")}"
region = "${var.region}"
}
data "aws_iam_policy_document" "test" {
statement {
sid = "TestAssumeRole"
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals = {
type = "AWS"
identifiers = [
"arn:aws:iam::~183:role/hc-devops",
]
}
sid = "BuUserTrustDocument"
effect = "Allow"
principals = {
type = "Federated"
identifiers = [
"arn:aws:iam::~875:saml-provider/ge-saml-for-aws",
]
}
condition {
test = "StringEquals"
variable = "SAML:aud"
values = ["https://signin.aws.amazon.com/saml"]
}
}
}
resource "aws_iam_role" "test_role" {
name = "test_role"
path = "/"
assume_role_policy = "${data.aws_iam_policy_document.test.json}"
}
Get Caller Identity
bash-4.4$ aws --profile GEHC-056 sts get-caller-identity
Enter MFA code for arn:aws:iam::772660252183:mfa/503072343:
{
"UserId": "AROAIWCCLC2BGRPQMJC7U:botocore-session-1537474244",
"Account": "730993910069",
"Arn": "arn:aws:sts::730993910069:assumed-role/hc-master/botocore-session-1537474244"
}
And the error:
bash-4.4$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
Error: Error refreshing state: 1 error(s) occurred:
* provider.aws: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
Terraform doesn't currently support prompting for the MFA token when it is run, because it is intended to run as non-interactively as possible, and supporting interactive provider configuration would apparently require significant rework of the provider structure. There's more discussion about this in this issue.
As also mentioned in that issue the best bet is to use some form of script/tool that already assumes the role prior to running Terraform.
I personally use AWS-Vault and have written a small shim shell script that I symlink to from terraform (and other commands, such as aws, that I want AWS-Vault to grab credentials for). It detects what it is being called as, finds the "real" binary using which -a, and then uses AWS-Vault's exec to run the target command with the appropriate credentials.
My script looks like this:
#!/bin/bash
set -eo pipefail
# Provides a shim to override target executables so that it is executed through aws-vault
# See https://github.com/99designs/aws-vault/blob/ae56f73f630601fc36f0d68c9df19ac53e987369/USAGE.md#overriding-the-aws-cli-to-use-aws-vault for more information about using it for the AWS CLI.
# Work out what we're shimming and then find the non shim version so we can execute that.
# which -a returns a sorted list of the order of binaries that are on the PATH so we want the second one.
INVOKED=$(basename $0)
TARGET=$(which -a ${INVOKED} | tail -n +2 | head -n 1)
if [ -z ${AWS_VAULT} ]; then
AWS_PROFILE="${AWS_DEFAULT_PROFILE:-read-only}"
(>&2 echo "Using temporary credentials from ${AWS_PROFILE} profile...")
exec aws-vault exec "${AWS_PROFILE}" --assume-role-ttl=60m -- "${TARGET}" "$@"
else
# If AWS_VAULT is already set then we want to just use the existing session instead of nesting them
exec "${TARGET}" "$#"
fi
It will use a profile in your ~/.aws/config file that matches the AWS_DEFAULT_PROFILE environment variable you have set, defaulting to a read-only profile which may or may not be a useful default for you. This makes sure that AWS-Vault assumes the IAM role, grabs the credentials and sets them as environment variables for the target process.
This means that as far as Terraform is concerned it is being given credentials via environment variables and this just works.
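For example (hypothetical paths: the shim above is saved as ~/bin/aws-vault-shim and ~/bin comes before the real binaries on PATH), the symlinks might be set up like this:
chmod +x ~/bin/aws-vault-shim
ln -s ~/bin/aws-vault-shim ~/bin/terraform
ln -s ~/bin/aws-vault-shim ~/bin/aws

# Plain invocations now go through aws-vault:
AWS_DEFAULT_PROFILE=GEHC-056 terraform plan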
One other way is to use credential_process in order to generate the credentials with a local script and cache the tokens in a new profile (let's call it tf_temp)
This script would:
check if the token is still valid for the tf_temp profile
if the token is valid, extract it from the existing config using aws configure get xxx --profile tf_temp
if the token is not valid, prompt the user to enter an MFA token
generate the session token with aws sts assume-role --token-code xxxx ... --profile your_profile
set the temporary profile token tf_temp using aws configure set xxx --profile tf_temp
You would have:
~/.aws/credentials
[prod]
aws_secret_access_key = redacted
aws_access_key_id = redacted
[tf_temp]
[tf]
credential_process = sh -c 'mfa.sh arn:aws:iam::{account_id}:role/{role} arn:aws:iam::{account_id}:mfa/{mfa_entry} prod 2> $(tty)'
mfa.sh
gist
Move this script to /bin/mfa.sh or /usr/local/bin/mfa.sh:
#!/bin/sh
set -e
role=$1
mfa_arn=$2
profile=$3
temp_profile=tf_temp
if [ -z $role ]; then echo "no role specified"; exit 1; fi
if [ -z $mfa_arn ]; then echo "no mfa arn specified"; exit 1; fi
if [ -z $profile ]; then echo "no profile specified"; exit 1; fi
resp=$(aws sts get-caller-identity --profile $temp_profile | jq '.UserId')
if [ ! -z $resp ]; then
echo '{
"Version": 1,
"AccessKeyId": "'"$(aws configure get aws_access_key_id --profile $temp_profile)"'",
"SecretAccessKey": "'"$(aws configure get aws_secret_access_key --profile $temp_profile)"'",
"SessionToken": "'"$(aws configure get aws_session_token --profile $temp_profile)"'",
"Expiration": "'"$(aws configure get expiration --profile $temp_profile)"'"
}'
exit 0
fi
read -p "Enter MFA token: " mfa_token
if [ -z $mfa_token ]; then echo "MFA token can't be empty"; exit 1; fi
data=$(aws sts assume-role --role-arn $role \
--profile $profile \
--role-session-name "$(tr -dc A-Za-z0-9 </dev/urandom | head -c 20)" \
--serial-number $mfa_arn \
--token-code $mfa_token | jq '.Credentials')
aws_access_key_id=$(echo $data | jq -r '.AccessKeyId')
aws_secret_access_key=$(echo $data | jq -r '.SecretAccessKey')
aws_session_token=$(echo $data | jq -r '.SessionToken')
expiration=$(echo $data | jq -r '.Expiration')
aws configure set aws_access_key_id $aws_access_key_id --profile $temp_profile
aws configure set aws_secret_access_key $aws_secret_access_key --profile $temp_profile
aws configure set aws_session_token $aws_session_token --profile $temp_profile
aws configure set expiration $expiration --profile $temp_profile
echo '{
"Version": 1,
"AccessKeyId": "'"$aws_access_key_id"'",
"SecretAccessKey": "'"$aws_secret_access_key"'",
"SessionToken": "'"$aws_session_token"'",
"Expiration": "'"$expiration"'"
}'
Use the tf profile in the provider settings. The first time, you will be prompted for the MFA token:
# terraform apply
Enter MFA token: 428313
This solution works fine with terraform and/or terragrunt
I've used a very simple, albeit perhaps dirty, solution to work around this:
First, let TF pick credentials from environment variables. Then:
AWS credentials file:
[access]
aws_access_key_id = ...
aws_secret_access_key = ...
region = ap-southeast-2
output = json
[target]
role_arn = arn:aws:iam::<target nnn>:role/admin
source_profile = access
mfa_serial = arn:aws:iam::<access nnn>:mfa/my-user
In console
CREDENTIAL=$(aws --profile target sts assume-role \
--role-arn arn:aws:iam::<target nnn>:role/admin --role-session-name TFsession \
--output text \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]")
<enter MFA>
#echo "CREDENTIAL: ${CREDENTIAL}"
export AWS_ACCESS_KEY_ID=$(echo ${CREDENTIAL} | cut -d ' ' -f 1)
export AWS_SECRET_ACCESS_KEY=$(echo ${CREDENTIAL} | cut -d ' ' -f 2)
export AWS_SESSION_TOKEN=$(echo ${CREDENTIAL} | cut -d ' ' -f 3)
terraform plan
UPDATE: a better solution is to use https://github.com/remind101/assume-role to achieve the same outcome.
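With assume-role the flow looks roughly like this (the profile name is a placeholder; the tool prompts for the MFA token and prints export statements that eval applies to the current shell):
eval $(assume-role target)   # prompts for the MFA code
terraform plan               # credentials are now in the AWS_* environment variables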
I personally use aws-vault and it works very well with terraform while IAM MFA is enabled.
Install aws-vault: https://github.com/99designs/aws-vault.git
Add (store) your AWS credentials in aws-vault: aws-vault add profilename
Update your ~/.aws/config file, adding the "mfa_serial" and "role_arn" information:
[profile <profilename>]
region = <region>
mfa_serial = arn:aws:iam::<AWSAccountA>:mfa/<username>
role_arn = arn:aws:iam::<AWSAccountB>:role/<rolename>
Use the following command to run Terraform:
$aws-vault exec <profilename> -- terraform apply
$<input your mfa code>
done.

What is the --rest-api-id and --resource-id and where do I find them?

I want to run this command: https://docs.aws.amazon.com/cli/latest/reference/apigateway/test-invoke-method.html
It requires these two fields, but I can't find any docs on what they are or where to find them:
aws apigateway test-invoke-method --rest-api-id 1234123412 --resource-id avl5sg8fw8 --http-method GET --path-with-query-string '/'
My API gateway endpoint looks like this:
https://abc123.execute-api.us-east-1.amazonaws.com/MyStage/
I only see one unique identifier there, but this command seems to require two IDs. Where do I find them in the API Gateway console?
Your rest-api-id is the identifier before 'execute-api' in your endpoint URL.
In your example URL:
https://abc123.execute-api.us-east-1.amazonaws.com/MyStage/
The rest-api-id is abc123
The resource ID can be obtained with the CLI using the get-resources call and the rest-api-id:
> aws apigateway get-resources --rest-api-id abc123
{
"items": [
{
"id": "xxxx1",
"parentId": "xxxx0",
"pathPart": "foo",
"path": "/foo",
"resourceMethods": {
"GET": {}
}
},
{
"id": "xxxx0",
"path": "/"
}
]}
Each one of the records in the items attribute is a resource, and its id attribute is a resource ID you can use in your test-invoke-method in conjunction with a method that is associated with the resource.
Both values are also visible at the top of the API Gateway console when you select one of your endpoints/resources.
Here is a portion of a bash script that you could use, assuming you have jq installed to parse the JSON and only one API. It gets the first API ID from the items array, then looks up that API's resource ID, then invokes the API using POST:
API_ID=`aws apigateway get-rest-apis | jq -r ".items[0].id"`
RESOURCE_ID=`aws apigateway get-resources --rest-api-id $API_ID | jq -r ".items[].id"`
aws apigateway test-invoke-method --rest-api-id $API_ID --resource-id $RESOURCE_ID --http-method POST
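For a POST you can also pass a request body; a minimal sketch, with a made-up JSON payload:
aws apigateway test-invoke-method \
  --rest-api-id "$API_ID" \
  --resource-id "$RESOURCE_ID" \
  --http-method POST \
  --path-with-query-string '/' \
  --body '{"email":"someone@example.com"}'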

Is there a way to clone a Cloudformation stack in the same region?

I would like to clone a cloudformation stack in the same region. Is this possible today using the Cloudformation console?
I have a cloudformation template that takes in a big list of parameters. Many times I want to create an identical stack with just a different stack name. Is there a quick way of doing this using the AWS console?
I am thinking of something along the lines of "Launch more like this" option in EC2.
I use the following shell script
#!/bin/sh -x
# debug
if [ -z "$1" ] ; then
STACK=jirithhu-monitorStack-1ALS8UFQP3SRV
else
STACK="$1"
fi
set -e
# parameter 1: stack ID : (example: hhu-monitorStack-1ALS8UFQP3SRV )
aws cloudformation describe-stacks --stack-name $STACK | jq .Stacks[0].Parameters > /tmp/xx.$$.pars
aws cloudformation get-template --stack-name $STACK | jq -rc .TemplateBody > /tmp/xx.$$.body
NEWNAME=`echo "COPY-$STACK"`
aws cloudformation create-stack --stack-name $NEWNAME \
--template-body file:///tmp/xx.$$.body \
--parameters "`cat /tmp/xx.$$.pars`" \
--capabilities '[ "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND" ]'
rm /tmp/xx.$$*
Alternatively, you can create a parameters input file and quickly launch a new stack. Example command:
aws cloudformation create-stack --stack-name startmyinstance
--template-body file://startmyinstance.json
--parameters file://startmyinstance-parameters.json
Parameters file:
[
{
"ParameterKey": "KeyPairName",
"ParameterValue": "MyKey"
},
{
"ParameterKey": "InstanceType",
"ParameterValue": "m1.micro"
}
]
Ref: https://aws.amazon.com/blogs/devops/passing-parameters-to-cloudformation-stacks-with-the-aws-cli-and-powershell/
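If you are cloning an existing stack this way, the parameters file can be generated from the stack itself rather than written by hand; a sketch assuming jq is installed and my-stack is a placeholder stack name:
aws cloudformation describe-stacks --stack-name my-stack \
  | jq '.Stacks[0].Parameters | map({ParameterKey, ParameterValue})' \
  > startmyinstance-parameters.json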

Query EC2 tags from within instance

Amazon recently added the wonderful feature of tagging EC2 instances with key-value pairs to make management of large numbers of VMs a bit easier.
Is there some way to query these tags in the same way as some of the other user-set data? For example:
$ curl http://169.254.169.254/latest/meta-data/placement/availability-zone
us-east-1d
Is there some similar way to query the tags?
The following bash script returns the Name of your current ec2 instance (the value of the "Name" tag). Modify TAG_NAME to your specific case.
TAG_NAME="Name"
INSTANCE_ID="`wget -qO- http://instance-data/latest/meta-data/instance-id`"
REGION="`wget -qO- http://instance-data/latest/meta-data/placement/availability-zone | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
TAG_VALUE="`aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=$TAG_NAME" --region $REGION --output=text | cut -f5`"
To install the aws cli
sudo apt-get install python-pip -y
sudo pip install awscli
If you use an IAM role instead of explicit credentials, use these IAM permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [ "ec2:DescribeTags"],
"Resource": ["*"]
}
]
}
Once you've got ec2-metadata and ec2-describe-tags installed (as mentioned in Ranieri's answer above), here's an example shell command to get the "name" of the current instance, assuming you have a "Name=Foo" tag on it.
Assumes EC2_PRIVATE_KEY and EC2_CERT environment variables are set.
ec2-describe-tags \
--filter "resource-type=instance" \
--filter "resource-id=$(ec2-metadata -i | cut -d ' ' -f2)" \
--filter "key=Name" | cut -f5
This returns Foo.
You can use a combination of the AWS metadata tool (to retrieve your instance ID) and the new Tag API to retrieve the tags for the current instance.
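For example, a minimal sketch of that combination using the AWS CLI (the region is hard-coded here as an assumption):
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-tags --region us-east-1 \
  --filters "Name=resource-id,Values=$INSTANCE_ID" \
  --output text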
You can add this script to your cloud-init user data to download EC2 tags to a local file:
#!/bin/sh
INSTANCE_ID=`wget -qO- http://instance-data/latest/meta-data/instance-id`
REGION=`wget -qO- http://instance-data/latest/meta-data/placement/availability-zone | sed 's/.$//'`
aws ec2 describe-tags --region $REGION --filter "Name=resource-id,Values=$INSTANCE_ID" --output=text | sed -r 's/TAGS\t(.*)\t.*\t.*\t(.*)/\1="\2"/' > /etc/ec2-tags
You need the AWS CLI tools installed on your system: you can either install them with a packages section in a cloud-config file before the script, use an AMI that already includes them, or add an apt or yum command at the beginning of the script.
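On a Debian/Ubuntu AMI, for example, that apt command could be as simple as:
apt-get update -qq && apt-get install -y awscli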
In order to access EC2 tags you need a policy like this one in your instance's IAM role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1409309287000",
"Effect": "Allow",
"Action": [
"ec2:DescribeTags"
],
"Resource": [
"*"
]
}
]
}
The instance's EC2 tags will be available in /etc/ec2-tags in this format:
FOO="Bar"
Name="EC2 tags with cloud-init"
You can include the file as-is in a shell script using . /etc/ec2-tags, for example:
#!/bin/sh
. /etc/ec2-tags
echo $Name
The tags are downloaded during instance initialization, so they will not reflect subsequent changes.
The script and IAM policy are based on itaifrenkel's answer.
If you are not in the default availability zone, the results from overthink's answer will come back empty.
ec2-describe-tags \
--region \
$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e "s/.$//") \
--filter \
resource-id=$(curl --silent http://169.254.169.254/latest/meta-data/instance-id)
If you want to add a filter to get a specific tag (elasticbeanstalk:environment-name in my case), then you can do this:
ec2-describe-tags \
--region \
$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e "s/.$//") \
--filter \
resource-id=$(curl --silent http://169.254.169.254/latest/meta-data/instance-id) \
--filter \
key=elasticbeanstalk:environment-name | cut -f5
The trailing | cut -f5 returns only the value of the tag we filtered on (the fifth field).
You can alternatively use the describe-instances cli call rather than describe-tags:
This example shows how to get the value of tag 'my-tag-name' for the instance:
aws ec2 describe-instances \
--instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) \
--query "Reservations[*].Instances[*].Tags[?Key=='my-tag-name'].Value" \
--region ap-southeast-2 --output text
Change the region to suit your local circumstances. This may be useful where your instance has the describe-instances privilege but not describe-tags in the instance profile policy.
I have pieced together the following, which is hopefully simpler and cleaner than some of the existing answers and uses only the AWS CLI with no additional tools.
This code example shows how to get the value of tag 'myTag' for the current EC2 instance:
Using describe-tags:
export AWS_DEFAULT_REGION=us-east-1
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-tags \
--filters "Name=resource-id,Values=$instance_id" 'Name=key,Values=myTag' \
--query 'Tags[].Value' --output text
Or, alternatively, using describe-instances:
aws ec2 describe-instances --instance-id $instance_id \
--query 'Reservations[].Instances[].Tags[?Key==`myTag`].Value' --output text
For Python:
import logging
from boto import utils, ec2
from os import environ
# import keys from os.env or use default (not secure)
aws_access_key_id = environ.get('AWS_ACCESS_KEY_ID', failobj='XXXXXXXXXXX')
aws_secret_access_key = environ.get('AWS_SECRET_ACCESS_KEY', failobj='XXXXXXXXXXXXXXXXXXXXX')
#load metadata , if = {} we are on localhost
# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
instance_metadata = utils.get_instance_metadata(timeout=0.5, num_retries=1)
region = instance_metadata['placement']['availability-zone'][:-1]
instance_id = instance_metadata['instance-id']
conn = ec2.connect_to_region(region, aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
# get tag status for our instance_id using filters
# http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-DescribeTags.html
tags = conn.get_all_tags(filters={'resource-id': instance_id, 'key': 'status'})
if tags:
instance_status = tags[0].value
else:
instance_status = None
logging.error('no status tag for '+region+' '+instance_id)
A variation on some of the answers above, but this is how I got the value of a specific tag from the user-data script on an instance:
REGION=$(curl http://instance-data/latest/meta-data/placement/availability-zone | sed 's/.$//')
INSTANCE_ID=$(curl -s http://instance-data/latest/meta-data/instance-id)
TAG_VALUE=$(aws ec2 describe-tags --region $REGION --filters "Name=resource-id,Values=$INSTANCE_ID" "Name=key,Values=<TAG_NAME_HERE>" | jq -r '.Tags[].Value')
Starting January 2022, this should be also available directly via ec2 metadata api (if enabled).
curl http://169.254.169.254/latest/meta-data/tags/instance
https://aws.amazon.com/about-aws/whats-new/2022/01/instance-tags-amazon-ec2-instance-metadata-service/
Using the AWS 'user data' and 'meta data' APIs, it's possible to write a script which wraps Puppet to start a Puppet run with a custom cert name.
First start an aws instance with custom user data: 'role:webserver'
#!/bin/bash
# Find the name from the user data passed in on instance creation
USER=$(curl -s "http://169.254.169.254/latest/user-data")
IFS=':' read -ra UDATA <<< "$USER"
# Find the instance ID from the meta data api
ID=$(curl -s "http://169.254.169.254/latest/meta-data/instance-id")
CERTNAME=${UDATA[1]}.$ID.aws
echo "Running Puppet for certname: " $CERTNAME
puppet agent -t --certname=$CERTNAME
This calls Puppet with a certname like 'webserver.i-hfg453.aws'. You can then create a node manifest called 'webserver', and Puppet's 'fuzzy node matching' means it will be used to provision all webservers.
This example assumes you build on a base image with puppet installed etc.
Benefits:
1) You don't have to pass round your credentials
2) You can be as granular as you like with the role configs.
jq + ec2metadata makes it a little nicer. I'm using cf and already have access to the region; otherwise you can grab it in bash.
aws ec2 describe-tags --region $REGION \
--filters "Name=resource-id,Values=`ec2metadata --instance-id`" | jq --raw-output \
'.Tags[] | select(.Key=="TAG_NAME") | .Value'
Without jq:
aws ec2 describe-tags --region us-west-2 \
--filters "Name=resource-id,Values=`ec2-metadata --instance-id | cut -d " " -f 2`" \
--query 'Tags[?Key==`Name`].Value' \
--output text
Download and run a standalone executable to do that.
Sometimes you cannot install the awscli, which depends on Python, and Docker might be out of the picture too.
Here is my implementation in golang:
https://github.com/hmalphettes/go-ec2-describe-tags
The Metadata tool seems to no longer be available, but that was an unnecessary dependency anyway.
Follow the AWS documentation to have the instance's profile grant it the "ec2:DescribeTags" action in a policy, restricting the target resources as much as you wish. (If you need a profile for another reason then you'll need to merge policies into a new profile-linked role).
Then:
aws --region $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed -e 's/.$//') ec2 describe-tags --filters Name=resource-type,Values=instance Name=resource-id,Values=$(curl http://169.254.169.254/latest/meta-data/instance-id) Name=key,Values=Name |
perl -nwe 'print "$1\n" if /"Value": "([^"]+)/;'
There are lots of good answers here, but none worked for me exactly out of the box; I think the CLI has been updated since some of them were written, and I do like using the CLI. The following single command works out of the box for me in 2021 (as long as the instance's IAM role is allowed to call describe-tags):
aws ec2 describe-tags \
--region "$(ec2-metadata -z | cut -d' ' -f2 | sed 's/.$//')" \
--filters "Name=resource-id,Values=$(ec2-metadata --instance-id | cut -d " " -f 2)" \
--query 'Tags[?Key==`Name`].Value' \
--output text
AWS has recently announced support for instance tags in Instance Metadata Service: https://aws.amazon.com/about-aws/whats-new/2022/01/instance-tags-amazon-ec2-instance-metadata-service/
If you have the tag metadata option enabled for an instance, you can simply do:
$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 900"`
$ curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/tags/instance
It is possible to get instance tags from within the instance via the metadata service.
First, allow access to tags in instance metadata as explained here.
Then run this command for IMDSv1:
curl http://169.254.169.254/latest/meta-data/tags/instance/Name
or this command for IMDSv2
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/tags/instance
Install AWS CLI:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
sudo apt-get install unzip
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Get the tags for the current instance:
aws ec2 describe-tags --filters "Name=resource-id,Values=`ec2metadata --instance-id`"
Outputs:
{
"Tags": [
{
"ResourceType": "instance",
"ResourceId": "i-6a7e559d",
"Value": "Webserver",
"Key": "Name"
}
]
}
Use a bit of perl to extract the tags:
aws ec2 describe-tags --filters \
"Name=resource-id,Values=`ec2metadata --instance-id`" | \
perl -ne 'print "$1\n" if /\"Value\": \"(.*?)\"/'
Returns:
Webserver
For those crazy enough to use the Fish shell on EC2, here's a handy snippet for your /home/ec2-user/.config/fish/config.fish. The hostdata command will then list all your tags as well as the public IP and hostname.
set -x INSTANCE_ID (wget -qO- http://instance-data/latest/meta-data/instance-id)
set -x REGION (wget -qO- http://instance-data/latest/meta-data/placement/availability-zone | sed 's/.$//')
function hostdata
aws ec2 describe-tags --region $REGION --filter "Name=resource-id,Values=$INSTANCE_ID" --output=text | sed -r 's/TAGS\t(.*)\t.*\t.*\t(.*)/\1="\2"/'
ec2-metadata | grep public-hostname
ec2-metadata | grep public-ipv4
end