Terraform unable to assume roles with MFA enabled - amazon-web-services

I'm having a terrible time getting Terraform to assume an IAM role in another account when MFA is required. Here's my setup:
AWS Config
[default]
region = us-west-2
output = json
[profile GEHC-000]
region = us-west-2
output = json
....
[profile GEHC-056]
source_profile = GEHC-000
role_arn = arn:aws:iam::~069:role/hc/hc-master
mfa_serial = arn:aws:iam::~183:mfa/username
external_id = ~069
AWS Credentials
[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx
[GEHC-000]
aws_access_key_id = same as above
aws_secret_access_key = same as above
Policies assigned to IAM user
STS Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AssumeRole",
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"arn:aws:iam::*:role/hc/hc-master"
]
}
]
}
User Policy
{
"Statement": [
{
"Action": [
"iam:*AccessKey*",
"iam:*MFA*",
"iam:*SigningCertificate*",
"iam:UpdateLoginProfile*",
"iam:RemoveUserFromGroup*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:iam::~183:mfa/${aws:username}",
"arn:aws:iam::~183:mfa/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/${aws:username}",
"arn:aws:iam::~183:mfa/*/*/*${aws:username}",
"arn:aws:iam::~183:user/${aws:username}",
"arn:aws:iam::~183:user/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/${aws:username}",
"arn:aws:iam::~183:user/*/*/*${aws:username}"
],
"Sid": "Write"
},
{
"Action": [
"iam:*Get*",
"iam:*List*"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "Read"
},
{
"Action": [
"iam:CreateUser*",
"iam:UpdateUser*",
"iam:AddUserToGroup"
],
"Effect": "Allow",
"Resource": [
"*"
],
"Sid": "CreateUser"
}
],
"Version": "2012-10-17"
}
Force MFA Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BlockAnyAccessOtherThanAboveUnlessSignedInWithMFA",
"Effect": "Deny",
"NotAction": "iam:*",
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
]
}
main.tf
provider "aws" {
profile = "GEHC-056"
shared_credentials_file = "${pathexpand("~/.aws/config")}"
region = "${var.region}"
}
data "aws_iam_policy_document" "test" {
statement {
sid = "TestAssumeRole"
effect = "Allow"
actions = [
"sts:AssumeRole",
]
principals = {
type = "AWS"
identifiers = [
"arn:aws:iam::~183:role/hc-devops",
]
}
sid = "BuUserTrustDocument"
effect = "Allow"
principals = {
type = "Federated"
identifiers = [
"arn:aws:iam::~875:saml-provider/ge-saml-for-aws",
]
}
condition {
test = "StringEquals"
variable = "SAML:aud"
values = ["https://signin.aws.amazon.com/saml"]
}
}
}
resource "aws_iam_role" "test_role" {
name = "test_role"
path = "/"
assume_role_policy = "${data.aws_iam_policy_document.test.json}"
}
Get Caller Identity
bash-4.4$ aws --profile GEHC-056 sts get-caller-identity
Enter MFA code for arn:aws:iam::772660252183:mfa/503072343:
{
"UserId": "AROAIWCCLC2BGRPQMJC7U:botocore-session-1537474244",
"Account": "730993910069",
"Arn": "arn:aws:sts::730993910069:assumed-role/hc-master/botocore-session-1537474244"
}
And the error:
bash-4.4$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
Error: Error refreshing state: 1 error(s) occurred:
* provider.aws: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.

Terraform doesn't currently support prompting for the MFA token at run time, since it is intended to run as non-interactively as possible, and it would apparently require significant rework of the provider structure to support this kind of interactive provider configuration. There's more discussion about this in this issue.
As also mentioned in that issue the best bet is to use some form of script/tool that already assumes the role prior to running Terraform.
I personally use AWS-Vault and have written a small shim shell script that I symlink to from terraform (and from other things, such as aws, that I want AWS-Vault to grab credentials for). It detects what it's being called as, finds the "real" binary using which -a, and then uses AWS-Vault's exec to run the target command with the specified credentials.
My script looks like this:
#!/bin/bash
set -eo pipefail
# Provides a shim to override target executables so that it is executed through aws-vault
# See https://github.com/99designs/aws-vault/blob/ae56f73f630601fc36f0d68c9df19ac53e987369/USAGE.md#overriding-the-aws-cli-to-use-aws-vault for more information about using it for the AWS CLI.
# Work out what we're shimming and then find the non shim version so we can execute that.
# which -a returns a sorted list of the order of binaries that are on the PATH so we want the second one.
INVOKED=$(basename "$0")
TARGET=$(which -a "${INVOKED}" | tail -n +2 | head -n 1)
if [ -z "${AWS_VAULT}" ]; then
  AWS_PROFILE="${AWS_DEFAULT_PROFILE:-read-only}"
  (>&2 echo "Using temporary credentials from ${AWS_PROFILE} profile...")
  exec aws-vault exec "${AWS_PROFILE}" --assume-role-ttl=60m -- "${TARGET}" "$@"
else
  # If AWS_VAULT is already set then we want to just use the existing session instead of nesting them
  exec "${TARGET}" "$@"
fi
It will use a profile in your ~/.aws/config file that matches the AWS_DEFAULT_PROFILE environment variable you have set, defaulting to a read-only profile which may or may not be a useful default for you. This makes sure that AWS-Vault assumes the IAM role, grabs the credentials and sets them as environment variables for the target process.
This means that as far as Terraform is concerned it is being given credentials via environment variables and this just works.
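If you want to replicate this, the wiring is just symlinks that shadow the real binaries on your PATH. A minimal sketch, assuming the shim above is saved as ~/bin/aws-vault-shim and ~/bin comes before the real binaries on your PATH:
ln -s ~/bin/aws-vault-shim ~/bin/terraform
ln -s ~/bin/aws-vault-shim ~/bin/aws
# from here on, terraform transparently runs under aws-vault with the chosen profile
AWS_DEFAULT_PROFILE=GEHC-056 terraform plan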

One other way is to use credential_process to generate the credentials with a local script and cache the tokens in a new profile (let's call it tf_temp).
This script would:
- check if the token is still valid for the profile tf_temp
- if the token is valid, extract it from the existing config using aws configure get xxx --profile tf_temp
- if the token is not valid, prompt the user to enter an MFA token
- generate the session token with aws sts assume-role --token-code xxxx ... --profile your_profile
- set the temporary profile token tf_temp using aws configure set xxx --profile tf_temp
You would have:
~/.aws/credentials
[prod]
aws_secret_access_key = redacted
aws_access_key_id = redacted
[tf_temp]
[tf]
credential_process = sh -c 'mfa.sh arn:aws:iam::{account_id}:role/{role} arn:aws:iam::{account_id}:mfa/{mfa_entry} prod 2> $(tty)'
mfa.sh (gist)
Move this script to /bin/mfa.sh or /usr/local/bin/mfa.sh:
#!/bin/sh
set -e
role=$1
mfa_arn=$2
profile=$3
temp_profile=tf_temp
if [ -z "$role" ]; then echo "no role specified"; exit 1; fi
if [ -z "$mfa_arn" ]; then echo "no mfa arn specified"; exit 1; fi
if [ -z "$profile" ]; then echo "no profile specified"; exit 1; fi
resp=$(aws sts get-caller-identity --profile "$temp_profile" 2> /dev/null | jq '.UserId')
if [ -n "$resp" ]; then
echo '{
"Version": 1,
"AccessKeyId": "'"$(aws configure get aws_access_key_id --profile $temp_profile)"'",
"SecretAccessKey": "'"$(aws configure get aws_secret_access_key --profile $temp_profile)"'",
"SessionToken": "'"$(aws configure get aws_session_token --profile $temp_profile)"'",
"Expiration": "'"$(aws configure get expiration --profile $temp_profile)"'"
}'
exit 0
fi
read -p "Enter MFA token: " mfa_token
if [ -z $mfa_token ]; then echo "MFA token can't be empty"; exit 1; fi
data=$(aws sts assume-role --role-arn $role \
--profile $profile \
--role-session-name "$(tr -dc A-Za-z0-9 </dev/urandom | head -c 20)" \
--serial-number $mfa_arn \
--token-code $mfa_token | jq '.Credentials')
aws_access_key_id=$(echo $data | jq -r '.AccessKeyId')
aws_secret_access_key=$(echo $data | jq -r '.SecretAccessKey')
aws_session_token=$(echo $data | jq -r '.SessionToken')
expiration=$(echo $data | jq -r '.Expiration')
aws configure set aws_access_key_id $aws_access_key_id --profile $temp_profile
aws configure set aws_secret_access_key $aws_secret_access_key --profile $temp_profile
aws configure set aws_session_token $aws_session_token --profile $temp_profile
aws configure set expiration $expiration --profile $temp_profile
echo '{
"Version": 1,
"AccessKeyId": "'"$aws_access_key_id"'",
"SecretAccessKey": "'"$aws_secret_access_key"'",
"SessionToken": "'"$aws_session_token"'",
"Expiration": "'"$expiration"'"
}'
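Also make the script executable, otherwise the AWS CLI won't be able to invoke it through credential_process:
chmod +x /usr/local/bin/mfa.sh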
Use the tf profile in your provider settings. The first time, you will be prompted for the MFA token:
# terraform apply
Enter MFA token: 428313
This solution works fine with Terraform and/or Terragrunt.

I've used a very simple, albeit perhaps dirty, solution to work around this:
First, let TF pick credentials from environment variables. Then:
AWS credentials file:
[access]
aws_access_key_id = ...
aws_secret_access_key = ...
region = ap-southeast-2
output = json
[target]
role_arn = arn:aws:iam::<target nnn>:role/admin
source_profile = access
mfa_serial = arn:aws:iam::<access nnn>:mfa/my-user
In the console:
CREDENTIAL=$(aws --profile target sts assume-role \
--role-arn arn:aws:iam::<target nnn>:role/admin --role-session-name TFsession \
--output text \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]")
<enter MFA>
#echo "CREDENTIAL: ${CREDENTIAL}"
export AWS_ACCESS_KEY_ID=$(echo ${CREDENTIAL} | cut -d ' ' -f 1)
export AWS_SECRET_ACCESS_KEY=$(echo ${CREDENTIAL} | cut -d ' ' -f 2)
export AWS_SESSION_TOKEN=$(echo ${CREDENTIAL} | cut -d ' ' -f 3)
terraform plan
UPDATE: a better solution is to use https://github.com/remind101/assume-role to achieve the same outcome.
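With that tool the whole dance collapses to one eval before running Terraform. A sketch, assuming a profile named target as configured above:
eval $(assume-role target)   # prompts for the MFA token, then exports AWS_* variables
terraform plan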

I personally use aws-vault and it works very well with Terraform when IAM MFA is enabled.
Install aws-vault: https://github.com/99designs/aws-vault.git
Add (store) your AWS credentials in aws-vault: aws-vault add <profilename>
Update your ~/.aws/config file to add the mfa_serial and role_arn information:
[profile <profilename>]
region = <region>
mfa_serial = arn:aws:iam::<AWSAccountA>:mfa/<username>
role_arn = arn:aws:iam::<AWSAccountB>:role/<rolename>
Use the following command to run Terraform:
$aws-vault exec <profilename> -- terraform apply
$<input your mfa code>
done.
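To sanity-check the setup before touching Terraform, you can ask STS who you are through the same mechanism:
$aws-vault exec <profilename> -- aws sts get-caller-identity
The returned Arn should be the assumed role in AWS account B, not your IAM user in account A.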

Related

"eksctl create cluster" command is not working

When executing this command, I get this error:
C:\WINDOWS\system32>eksctl create cluster --name eksctl-demo --profile myAdmin2
Error: checking AWS STS access – cannot get role ARN for current session: operation error STS: GetCallerIdentity, failed to sign request: failed to retrieve credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://169.254.169.254/latest/meta-data/iam/security-credentials/": dial tcp 169.254.169.254:80: i/o timeout
myAdmin2's IAM user credentials are set up as follows:
Credentials file:
[myAdmin2]
aws_access_key_id = ******************
aws_secret_access_key = ********************
config file:
[profile myAdmin2]
region = us-east-2
output = json
myAdmin2 has access to the console:
C:\WINDOWS\system32>aws iam list-users --profile myAdmin2
{
"Users": [
{
"Path": "/",
"UserName": "myAdmin",
"UserId": "AIDAYYPFV776ELVEJ5ZVQ",
"Arn": "arn:aws:iam::602313981948:user/myAdmin",
"CreateDate": "2022-09-30T19:08:08+00:00"
},
{
"Path": "/",
"UserName": "myAdmin2",
"UserId": "AIDAYYPFV776LEDK2PCCI",
"Arn": "arn:aws:iam::602313981948:user/myAdmin2",
"CreateDate": "2022-09-30T21:39:33+00:00"
}
]
}
I had problems working with myAdmin; that's why I created a new IAM user called myAdmin2.
myAdmin2 is granted the AdministratorAccess permission.
aws cli version installed:
C:\WINDOWS\system32>aws --version
aws-cli/2.7.35 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
My Env variables:
C:\WINDOWS\system32>set
AWS_ACCESS_KEY_ID= ***********the same as I have in credentials file
AWS_CONFIG_FILE=~/.aws/config
AWS_DEFAULT_PROFILE=myAdmin2
AWS_DEFAULT_REGION=us-east-2
AWS_PROFILE=myAdmin2
AWS_SECRET_ACCESS_KEY=****************the same as I have in credentials file
AWS_SHARED_CREDENTIALS_FILE=~/.aws/credentials
I think those are all the necessary things I have to mention. If someone can help, please do; I can't move past this error!
It worked finally! Everything was well configured; I just had to reboot my laptop, and that resolved the issue!

AWS CodeBuild terraform init "no valid credential sources for S3 Backend found."

Terraform 1.2.7
I have an AWS CodeBuild project which assumes the role devops:
resource "aws_codebuild_project" "code_build" {
name = "${var.app_name}-${var.target_env}-${var.build_project}"
description = "${var.app_name} ${var.build_project} pipeline on ${var.target_env}"
service_role = "arn:aws:iam::xxxxx:role/devops"
....
The devops role has a trust policy that allows it to be assumed by the CodeBuild service, and it comes with admin privileges:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Admin privileges
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
Terraform backend config
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~>3.0"
}
}
backend "s3" {
bucket = "tfstatebucket"
key = "infrastructure/terraform.tfstate"
region = "eu-central-1"
role_arn = "arn:aws:iam::xxxx:role/devops"
dynamodb_table = "cdc-terraform-up-and-running-lock"
encrypt = true
}
}
Buildspec.yaml
version: 0.2
phases:
  install:
    commands:
      - |
        if [ -n "$INSTALL_TOOLS_SCRIPT" ]; then
          ./$INSTALL_TOOLS_SCRIPT
        fi
  pre_build:
    commands:
      - |
        if [ -n "$PREBUILD_SCRIPT" ]; then
          ./$PREBUILD_SCRIPT
        fi
  build:
    commands:
      - |
        if [ -n "$BUILD_SCRIPT" ]; then
          ./$BUILD_SCRIPT
        fi
  post_build:
    commands:
      - |
        if [ -n "$POSTBUILD_SCRIPT" ]; then
          ./$POSTBUILD_SCRIPT
        fi
install_tools.sh
#!/bin/bash
set -euxo pipefail
function install_terraform() {
echo Installing Terraform...
curl -s -qL -o terraform_install.zip https://releases.hashicorp.com/terraform/1.2.7/terraform_1.2.7_linux_amd64.zip
unzip terraform_install.zip -d /usr/bin/
chmod +x /usr/bin/terraform
terraform --version
}
install_terraform
prebuild.sh
#!/bin/bash
set -euxo pipefail
echo Terraform init...
terraform init
I am able to successfully assume the role and run terraform init/plan/apply on my local machine, but it fails on CodeBuild.
Update
I've removed role_arn from the backend config and provider, and the issue still persists.
Updated Terraform backend config
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~>3.0"
}
}
backend "s3" {
bucket = "tfstatebucket"
key = "infrastructure/terraform.tfstate"
region = "eu-central-1"
dynamodb_table = "cdc-terraform-up-and-running-lock"
encrypt = true
}
}
Updated provider
provider "aws" {
region = lookup(var.aws_region, var.env)
allowed_account_ids = [
lookup(var.account_id, var.env),
]
}
UPDATE
For the sake of it, I manually attempted assume-role via the CLI, successfully. However, terraform init still fails, complaining about a wrong session token. How is that even possible?
Log Output
+ aws sts assume-role --role-arn arn:aws:iam::***:role/devops --role-session-name codebuild
+ cat creds
{
"Credentials": {
"AccessKeyId": "***",
"SecretAccessKey": "***",
"SessionToken": "***",
"Expiration": "2022-08-23T21:47:47+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "***:codebuild",
"Arn": "arn:aws:sts::***:assumed-role/devops/codebuild"
}
}
++ jq .Credentials.AccessKeyId
+ export 'AWS_ACCESS_KEY_ID="*"'
+ AWS_ACCESS_KEY_ID='"*"'
++ jq .Credentials.SecretAccessKey
+ export 'AWS_SECRET_ACCESS_KEY="*"'
+ AWS_SECRET_ACCESS_KEY='"*"'
++ jq .Credentials.SessionToken
+ export 'AWS_SESSION_TOKEN="*"'
+ AWS_SESSION_TOKEN='"*"'
+ terraform init
Initializing modules...
- aws_appautoscaling_ecs_consumer_target in tfmodules/autoscaling
- aws_appautoscaling_ecs_server_target in tfmodules/autoscaling
- aws_appautoscaling_ecs_websocket_server_target in tfmodules/autoscaling
Initializing the backend...
╷
│ Error: error configuring S3 Backend: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: 87aedbae-938c-4019-a82a-47a53dfe06f5
│
The code associated with the above output:
aws sts assume-role --role-arn arn:aws:iam::***:role/devops --role-session-name codebuild > creds
cat creds
export AWS_ACCESS_KEY_ID=$(cat creds | jq '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(cat creds | jq '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(cat creds | jq '.Credentials.SessionToken')
I've managed to find the issue.
Internally, the AWS SDK queries the container credentials endpoint 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI for credentials. Because my CodeBuild instance was configured to run in a private VPC, it had to go through a corporate proxy for external resources. Only the instance metadata IP (169.254.169.254) was whitelisted in the no_proxy/NO_PROXY config. Whitelisting 169.254.170.2 in my proxy configuration as well solved the problem.
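For anyone hitting the same wall, the resulting proxy environment looked roughly like this (the proxy host and port here are placeholders, not my real values):
export HTTP_PROXY=http://corporate-proxy:3128
export HTTPS_PROXY=http://corporate-proxy:3128
# both the instance metadata IP and the container credentials IP must bypass the proxy
export NO_PROXY=169.254.169.254,169.254.170.2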

How to securely allow access to AWS Secrets Manager with Terraform and cloud-init

I have the situation whereby I am having Terraform create a random password and store it into AWS Secrets Manager.
My Terraform password and secrets manager config:
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
The above works well. However I am not clear on how to achieve my final goal...
I have an AWS EC2 Instance which is also configured via Terraform, when the system boots it executes some cloud-init config which runs a setup script (Bash script). The Bash setup script needs to install some server software and set a password for that server software. I am not certain how to securely access my_password from that Bash script during setup.
My Terraform config for the instance and cloud-init config:
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
...
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
my_password=`<MY PASSWORD IS NEEDED HERE>` # TODO retrieve via cURL call to Secrets Manager API?
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
I need to be able to securely retrieve the password from the AWS Secrets Manager when the cloud-init script runs, as I have read that embedding it in the bash script is considered insecure.
I have also read that AWS has the notion of Temporary Credentials, and that these can be associated with an EC2 instance - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Using Terraform, can I create temporary credentials (say with a 10-minute TTL) and grant them to my AWS EC2 instance, so that when my Bash script runs during cloud-init it can retrieve the password from AWS Secrets Manager?
I have seen that on the Terraform aws_instance resource, I can associate a iam_instance_profile and I have started by trying something like:
resource "aws_iam_instance_profile" "my_instance_iam_instance_profile" {
name = "my_instance_iam_instance_profile"
path = "/development/"
role = aws_iam_role.my_instance_iam_role.name
tags = {
Environment = "dev"
}
}
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
// TODO - how to specify temporary-credential access to a specific secret in AWS Secrets Manager from EC2?
tags = {
Environment = "dev"
}
}
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
iam_instance_profile = join("", [aws_iam_instance_profile.my_instance_iam_instance_profile.path, aws_iam_instance_profile.my_instance_iam_instance_profile.name])
...
}
Unfortunately I can't seem to find any details on what I should put in the Terraform aws_iam_role which would allow my EC2 instance to access the Secret in the AWS Secrets Manager for a temporary period of time.
Can anyone advise? I would also be open to alternative approaches as long as they are also secure.
Thanks
You can create an aws_iam_policy or an inline policy that allows access to certain SSM parameters based on date and time.
In the case of an inline policy, it can be attached to the instance role, which would look something like this:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
inline_policy {
name = "my_inline_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/development-*",
"Condition": {
"DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
"DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
}
}]
})
}
tags = {
Environment = "dev"
}
}
So in the end the suggestions from @ervin-szilagyi got me 90% of the way there... I then needed to make some small changes to his suggestion. I am including my updated changes here to hopefully help others who struggle with this.
My aws_iam_role that allows temporary access (10 minutes) to the password now looks like:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "access_my_password_iam_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": aws_secretsmanager_secret.my_password_secret.arn,
"Condition": {
"DateGreaterThan": { "aws:CurrentTime": timestamp() },
"DateLessThan": { "aws:CurrentTime": timeadd(timestamp(), "10m") }
}
},
{
"Effect": "Allow",
"Action": "secretsmanager:ListSecrets",
"Resource": "*"
}
]
})
}
tags = {
Environment = "dev"
}
}
To retrieve the password during cloud-init, in the end I switched to using the aws CLI command as opposed to cURL, which yielded a cloud-init config like the following:
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
# Retrieve SA password from AWS Secrets Manager
command="aws --output text --region ${local.aws_region} secretsmanager get-secret-value --secret-id ${aws_secretsmanager_secret.my_password_secret.id} --query SecretString"
max_retry=5
counter=0
until my_password=$($command)
do
sleep 1
[[ counter -eq $max_retry ]] && echo "Failed!" && exit 1
echo "Attempt #$counter - Unable to retrieve AWS Secret, trying again..."
((counter++))
done
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
There are two main ways to achieve this:
pass the value as is during the instance creation with terraform
post-bootstrap invocation of some script
Your approach of polling it in the cloud-init is a hybrid one, which is perfectly fine, but I'm not sure whether you actually need to go down that route.
Let's explore the first option, where you do everything in Terraform. There are two sub-options, depending on whether you create the secret and the instance within the same Terraform execution run (within the same folder in which the code resides) or in a two-step process where you create the secret first and then the instance; there is a minor difference between the two in how you pass the secret value as a var to the script.
Case A: in case they are created together:
You can pass the password directly to the script.
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${random_password.my_password.result} /opt/srv/bin/install.sh
EOF
}
}
Case B: if they are created in separate folders
You could use a data resource to get the secret value in Terraform (the role with which you are deploying your Terraform code will need the secretsmanager:GetSecretValue permission):
data "aws_secretsmanager_secret_version" "my_password" {
secret_id = "/development/my_password"
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${data.aws_secretsmanager_secret_version.my_password.secret_string} /opt/srv/bin/install.sh
EOF
}
}
In both cases you wouldn't need to assign SSM permissions to the EC2 instance profile attached to the instance, you won't need to use curl or other means in the script, and the password would not be part of your bash script.
It will be stored in your terraform state, so you should make sure that the access to it is restricted.
Even with the hybrid approach, where you get the secret from Secrets Manager during instance bootstrap, the password would still be stored in your state, since you are creating that secret with the random_password resource; see Terraform's documentation on sensitive data in state.
Now, let's look at option 2. It is very similar to your approach, but instead of doing it in the user data, you can use Systems Manager Run Command to start your installation script as a post-bootstrap step. Then, depending on how you invoke the script (whether it is present locally on the instance, or you are using a document with State Manager), you can either pass the secret to it as a variable again, or fetch it from Secrets Manager with the AWS CLI or curl, whichever you prefer (which will require the necessary IAM permissions).
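A rough sketch of that second option with the AWS CLI, assuming the instance runs the SSM agent under an instance profile that permits Systems Manager, and that the install script on the instance fetches the secret itself as in the cloud-init example above:
aws ssm send-command \
--document-name "AWS-RunShellScript" \
--targets "Key=instanceids,Values=<your instance id>" \
--parameters 'commands=["/opt/srv/bin/install.sh"]'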

Invoking AWS Lambda via Unsigned POST to REST API

I want to have a form on my Jekyll website that visitors can fill out, and the form action should POST to an AWS Lambda function. No JavaScript is allowed on the website, so the POST must not require signing.
I want the simplest possible setup, and do not need high security. If there is a way to avoid using AWS API Gateway to create an HTTP API, and somehow have the Lambda function directly receive the POST from the user's web browser, that would be perfect. If API Gateway is required, then the simplest solution would be best.
I want to use command line commands exclusively (not a web browser) to work with the AWS API. This allows for a scripted solution.
I've spent some time on the problem, and here is what I've got. I've marked questions in the deploy script with TODO. There is some extra code in that script which might not be needed. Problem is, I'm unsure what to delete because I just can't figure out how to provide the POST to the lambda.
The scripts use jq and yq so the bash scripts can parse JSON and YAML, respectively.
_config.yml
aws:
  cloudfront:
    distributionId: "" # Provide value if CloudFront is used on this site
  lambda:
    addSubscriber:
      custom: # TODO change these values to suit your website
        iamRoleName: lambda-ex
        name: addSubscriberAwsLambdaSample
        handler: addSubscriberAwsLambda.lambda_handler
        runtime: python3.8
      computed: # These values are computed by the _bin/awsLambda setup and deploy scripts
        arn: arn:aws:lambda:us-east-1:031372724784:function:addSubscriberAwsLambdaSample:3
        iamRoleArn: arn:aws:iam::031372724784:role/lambda-ex
utils source bash script
#!/bin/bash
function readYaml {
# $1 - path
yq r _config.yml "$1"
}
function writeYaml {
# $1 - path
# $2 - value
yq w -i _config.yml "$1" "$2"
}
# AWS Lambda values
export LAMBDA_IAM_ROLE_ARN="$( readYaml aws.lambda.addSubscriber.computed.iamRoleArn )"
export LAMBDA_NAME="$( readYaml aws.lambda.addSubscriber.custom.name )"
export LAMBDA_RUNTIME="$( readYaml aws.lambda.addSubscriber.custom.runtime )"
export LAMBDA_HANDLER="$( readYaml aws.lambda.addSubscriber.custom.handler )"
export LAMBDA_IAM_ROLE_NAME="$( readYaml aws.lambda.addSubscriber.custom.iamRoleName )"
export PACKAGE_DIR="${GIT_ROOT}/_package"
export LAMBDA_ZIP="${PACKAGE_DIR}/function.zip"
# Misc values
export TITLE="$( readYaml title )"
export URL="$( readYaml url )"
export DOMAIN="$( echo "$URL" | sed -n -e 's,^https\?://,,p' )"
setup bash script
#!/bin/bash
# Inspired by https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-awscli.html
SOURCE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
GIT_ROOT="$( git rev-parse --show-toplevel )"
cd "${GIT_ROOT}"
source _bin/utils
# Define the execution role that gives an AWS Lambda function permission to access AWS resources.
read -r -d '' ROLE_POLICY_JSON <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
# If a role named $LAMBDA_IAM_ROLE_NAME is already defined then use it
ROLE_RESULT="$( aws iam get-role --role-name "$LAMBDA_IAM_ROLE_NAME" 2> /dev/null )"
if [ $? -ne 0 ]; then
ROLE_RESULT="$( aws iam create-role \
--role-name "$LAMBDA_IAM_ROLE_NAME" \
--assume-role-policy-document "$ROLE_POLICY_JSON"
)"
fi
LAMBDA_IAM_ROLE_ARN="$( jq -r .Role.Arn <<< "$ROLE_RESULT" )"
writeYaml aws.lambda.addSubscriber.computed.iamRoleArn "$LAMBDA_IAM_ROLE_ARN"
deploy bash script
#!/bin/bash
# Call this script after the setup script has created the IAM role
# that gives the addSubscriber AWS Lambda function permission to access AWS resources
#
# 1) This script builds the AWS Lambda package and deploys it, with permissions.
# Any previous version of the AWS Lambda is deleted.
#
# 2) The newly (re)created AWS Lambda ARN is stored in _config.yml
#
# 3) An AWS Gateway HTTP API is created so static web pages can POST subscriber information to the AWS Lambda function.
# Because the web page is not allowed to have JavaScript, the POST is unsigned.
# *** The API must allow for an unsigned POST!!! ***
# Set cwd to the git project root
GIT_ROOT="$( git rev-parse --show-toplevel )"
cd "${GIT_ROOT}"
# Load configuration environment variables from _bin/utils:
# DOMAIN, LAMBDA_IAM_ROLE_ARN, LAMBDA_IAM_ROLE_NAME, LAMBDA_HANDLER, LAMBDA_NAME, LAMBDA_RUNTIME, LAMBDA_ZIP, PACKAGE_DIR, and URL
source _bin/utils
# Directory that this script resides in
SOURCE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "Building the AWS Lambda and packaging it into a zip file"
"$SOURCE_DIR/package" "$PACKAGE_DIR" > /dev/null
# Check to see if the Lambda function already exists.
LAMBDA="$( aws lambda list-functions | jq ".Functions[] | select(.FunctionName | contains(\"$LAMBDA_NAME\"))" )"
if [ -z "$LAMBDA" ]; then
echo "The AWS Lambda function '$LAMBDA_NAME' does not exist yet, so create it"
LAMBDA_METADATA="$( aws lambda create-function \
--description "Add subscriber to the MailChimp list with ID '$MC_LIST_ID_MSLINN' for the '$DOMAIN' website" \
--environment "{
\"Variables\": {
\"MC_API_KEY_MSLINN\": \"$MC_API_KEY_MSLINN\",
\"MC_LIST_ID_MSLINN\": \"$MC_LIST_ID_MSLINN\",
\"MC_USER_NAME_MSLINN\": \"$MC_USER_NAME_MSLINN\"
}
}" \
--function-name "$LAMBDA_NAME" \
--handler "$LAMBDA_HANDLER" \
--role "arn:aws:iam::${AWS_ACCOUNT_ID}:role/$LAMBDA_IAM_ROLE_NAME" \
--runtime "$LAMBDA_RUNTIME" \
--zip-file "fileb://$LAMBDA_ZIP" \
| jq -S .
)"
LAMBDA_ARN="$( jq -r .Configuration.FunctionArn <<< "$LAMBDA_METADATA" )"
else
echo "The AWS Lambda function '$LAMBDA_NAME' already exists, so update it"
LAMBDA_METADATA="$( aws lambda update-function-code \
--function-name "$LAMBDA_NAME" \
--publish \
--zip-file "fileb://$LAMBDA_ZIP" \
| jq -S .
)"
LAMBDA_ARN="$( jq -r .FunctionArn <<< "$LAMBDA_METADATA" )"
fi
echo "AWS Lambda ARN is $LAMBDA_ARN"
writeYaml aws.lambda.addSubscriber.computed.arn "$LAMBDA_ARN"
echo "Attach the AWSLambdaBasicExecutionRole managed policy to $LAMBDA_IAM_ROLE_NAME."
aws iam attach-role-policy \
--role-name $LAMBDA_IAM_ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
#### Integrate with API Gateway for REST
#### Some or all of the following code is probably not required
GATEWAY_NAME="addSubscriberTo_$MC_LIST_ID_MSLINN"
API_GATEWAYS="$( aws apigateway get-rest-apis )"
if [ "$( jq ".items[] | select(.name | contains(\"$GATEWAY_NAME\"))" <<< "$API_GATEWAYS" )" ]; then
echo "API gateway '$GATEWAY_NAME' already exists."
else
echo "Creating API gateway '$GATEWAY_NAME'."
API_JSON="$( aws apigateway create-rest-api \
--name "$GATEWAY_NAME" \
--description "API for adding a subscriber to the Mailchimp list with ID '$MC_LIST_ID_MSLINN' for the '$DOMAIN' website"
)"
REST_API_ID="$( jq -r .id <<< "$API_JSON" )"
API_RESOURCES="$( aws apigateway get-resources --rest-api-id $REST_API_ID )"
ROOT_RESOURCE_ID="$( jq -r .items[0].id <<< "$API_RESOURCES" )"
NEW_RESOURCE="$( aws apigateway create-resource \
--rest-api-id "$REST_API_ID" \
--parent-id "$RESOURCE_ID" \
--path-part "{proxy+}"
)"
NEW_RESOURCE_ID=$( jq -r .id <<< $NEW_RESOURCE )
if false; then
# Is this step useful for any reason?
aws apigateway put-method \
--authorization-type "NONE" \
--http-method ANY \
--resource-id "$NEW_RESOURCE_ID" \
--rest-api-id "$REST_API_ID"
fi
# The following came from https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#set-up-lambda-proxy-integration-using-cli
# Instead of supplying an IAM role for --credentials, call the add-permission command to add resource-based permissions.
# I need an example of this.
# Alternatively, how to obtain IAM_ROLE_ID? Again, I need an example.
aws apigateway put-integration \
--credentials "arn:aws:iam::${IAM_ROLE_ID}:role/apigAwsProxyRole" \
--http-method ANY \
--integration-http-method POST \
--rest-api-id "$REST_API_ID" \
--resource-id "$NEW_RESOURCE_ID" \
--type AWS_PROXY \
--uri arn:aws:apigateway:`aws configure get region`:lambda:path/2015-03-31/functions/$LAMBDA_ARN
if [ "$LAMBDA_TEST"]; then
# Deploy the API to a test stage
aws apigateway create-deployment \
--rest-api-id "$REST_API_ID" \
--stage-name test
else
# Deploy the API live
aws apigateway create-deployment \
--rest-api-id "$REST_API_ID" \
--stage-name TODO_WhatNameGoesHere
fi
fi
echo "Check out the defined lambdas at https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions"
Scripting infrastructure in Bash is painful. You might get it right eventually, but there are tools that make the process infinitely easier.
I prefer Terraform; here's what an API Gateway + Lambda setup looks like:
provider "aws" {
}
# lambda
resource "random_id" "id" {
byte_length = 8
}
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "/tmp/lambda.zip"
source {
content = <<EOF
module.exports.handler = async (event, context) => {
// write the lambda code here
};
EOF
filename = "main.js"
}
}
resource "aws_lambda_function" "lambda" {
function_name = "${random_id.id.hex}-function"
filename = data.archive_file.lambda_zip.output_path
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
handler = "main.handler"
runtime = "nodejs12.x"
role = aws_iam_role.lambda_exec.arn
}
data "aws_iam_policy_document" "lambda_exec_role_policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents"
]
resources = [
"arn:aws:logs:*:*:*"
]
}
}
resource "aws_cloudwatch_log_group" "loggroup" {
name = "/aws/lambda/${aws_lambda_function.lambda.function_name}"
retention_in_days = 14
}
resource "aws_iam_role_policy" "lambda_exec_role" {
role = aws_iam_role.lambda_exec.id
policy = data.aws_iam_policy_document.lambda_exec_role_policy.json
}
resource "aws_iam_role" "lambda_exec" {
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
# api gw
resource "aws_apigatewayv2_api" "api" {
name = "api-${random_id.id.hex}"
protocol_type = "HTTP"
target = aws_lambda_function.lambda.arn
}
resource "aws_lambda_permission" "apigw" {
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.lambda.arn
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.api.execution_arn}/*/*"
}
output "domain" {
value = aws_apigatewayv2_api.api.api_endpoint
}
Note that only the last two resources relate to the API Gateway; all the previous ones are for the Lambda function.
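Once applied, you can exercise the unsigned POST straight from a shell, with no SigV4 signing involved:
terraform apply
curl -X POST "$(terraform output -raw domain)" -d 'email=visitor@example.com'
(terraform output -raw requires Terraform 0.14+; on older versions drop the -raw flag and trim the quotes from the output.)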

Referencing gitlab secrets in Terraform

I am quite new to Terraform and GitLab CI, and there is something I am trying to do with them.
I want to use Terraform to create an IAM user and an S3 bucket, use policies to allow this IAM user certain operations on that S3 bucket, and have the IAM user's credentials saved in the artifactory.
Now the above is going to be my core module.
The core module looks something like the below:
Contents of : aws-s3-iam-combo.git
(The credentials for the IAM user that runs all the Terraform, say admin-user, would be stored in GitLab secrets.)
main.tf
resource "aws_s3_bucket" "bucket" {
bucket = "${var.name}"
acl = "private"
force_destroy = "true"
tags {
environment = "${var.tag_environment}"
team = "${var.tag_team}"
}
policy =<<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "${aws_iam_user.s3.arn}"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
resource "aws_iam_user" "s3" {
name = "${var.name}-s3"
force_destroy = "true"
}
resource "aws_iam_access_key" "s3" {
user = "${aws_iam_user.s3.name}"
}
resource "aws_iam_user_policy" "s3_policy" {
name = "${var.name}-policy-s3"
user = "${aws_iam_user.s3.name}"
policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
outputs.tf
output "bucket" {
value = "${aws_s3_bucket.bucket.bucket}"
}
output "bucket_id" {
value = "${aws_s3_bucket.bucket.id}"
}
output "iam_access_key_id" {
value = "${aws_iam_access_key.s3.id}"
}
output "iam_access_key_secret" {
value = "${aws_iam_access_key.s3.secret}"
}
variables.tf
variable "name" {
type = "string"
}
variable "tag_team" {
type = "string"
default = ""
}
variable "tag_environment" {
type = "string"
default = ""
}
variable "versioning" {
type = "string"
default = false
}
variable "profile" {
type = "string"
default = ""
}
Anyone in the organization who now needs to create S3 buckets would create a new repo, something of the form:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
}
gitlab-ci.yml
stages:
  - plan
  - apply

plan:
  image: hashicorp/terraform
  stage: plan
  script:
    - terraform init
    - terraform plan

apply:
  image: hashicorp/terraform
  stage: apply
  script:
    - terraform init
    - terraform apply
  when: manual
  only:
    - master
And then the pipeline triggers, and when this repo gets merged to master, the resources (S3 bucket and IAM user) are created and the user gets this IAM user's credentials.
Now the problem is that we have multiple AWS accounts. So if a dev wants to create an S3 bucket in a certain account, that is not possible with the above setup, as the admin-user whose creds are in GitLab secrets belongs to one account alone.
Now I don't understand how do I achieve the above requirement of mine. I have the below idea: (Please suggest if there's a better way to do this)
Have multiple different creds set up in gitlab secrets for each AWS account in question
Take user input, specifying the AWS account they want the resources created in, as a variable. So something like say:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
aws_account = "account1"
}
And then in the aws-s3-iam-combo.git main.tf somehow read the creds for account1 from the gitlab secrets.
Now I do not know how to achieve this, e.g., how do I read the required secret variable from GitLab?
Can someone please help here?
You asked this some time ago, but maybe my idea will still help someone...
You can do this with envsubst (requires the pkg gettext to be installed on your runner or in the Docker image used to run the pipeline).
Here is an example:
First, in the project settings you set your different user accounts as environment variables (project secrets):
SECRET_1: my-secret-1
SECRET_2: my-secret-2
SECRET_3: my-secret-3
Then, create a file that holds a Terraform variable, let's name it vars_template.tf:
variable "gitlab_secrets" {
description = "Variables from GitLab"
type = "map"
default = {
secret_1 = "$SECRET_1"
secret_2 = "$SECRET_2"
secret_3 = "$SECRET_3"
}
}
In your CI pipeline, you can now configure the following:
plan:dev:
  stage: plan dev
  script:
    - envsubst < vars_template.tf > ./vars_envsubst.tf
    - rm vars_template.tf
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
      - environments/dev/vars_envsubst.tf

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - rm vars_template.tf
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
It's important to note that the original vars_template.tf has to be deleted, otherwise Terraform will throw an error that the variable is defined multiple times. You could circumvent this by storing the template file in a directory which is outside the Terraform working directory though.
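For example, the plan job's script could read the template from a sibling templates/ directory instead (the path here is only an assumption):
# template lives outside the Terraform working directory, so no rm is needed
envsubst < ../templates/vars_template.tf > ./vars_envsubst.tf
terraform init
terraform plan -out "planfile_dev"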
But from the Terraform state you can see that the variable values were correctly substituted:
"outputs": {
"gitlab_secrets": {
"sensitive": false,
"type": "map",
"value": {
"secret_1": "my-secret-1",
"secret_2": "my-secret-2",
"secret_3": "my-secret-3"
}
}
}
You can then access the values with "${var.gitlab_secrets["secret_1"]}" in your Terraform resources etc.
UPDATE: Note that this method will store the secrets in the Terraform state file, which can be a potential security issue if the file is stored in an S3 bucket for collaborative work with Terraform. The bucket should at least be encrypted. In addition, it's recommended to limit access to the files with ACLs so that, e.g., only a user terraform has access to it. And, of course, a user could reveal the secrets via Terraform outputs...