Append principal to bucket policy document instead of overwriting it

I'm developing a SPA with several environments: dev, preprod, and prod.
Each environment has a corresponding CloudFront distribution and website bucket.
We also have a static website with the user manual, served on the behavior /documentation/*.
This static website is stored in a separate bucket.
All environments share the same documentation, so there is only one bucket for all environments.
The project is a company portal, so the user documentation should not be accessible publicly.
To achieve that, we are using an Origin Access Identity (OAI), so the bucket is accessible only through CloudFront (a Lambda@Edge function ensures the user has a valid token and redirects them otherwise, so the documentation stays private).
Everything is fine when I deploy on dev using
terraform workspace select dev
terraform apply -var-file=dev.tfvars
But when I try to deploy on preprod
terraform workspace select preprod
terraform apply -var-file=preprod.tfvars
Terraform changes the OAI in the bucket policy this way:
  # module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
  ~ resource "aws_s3_bucket_policy" "documentation_policy" {
        bucket = "my-bucket"
      ~ policy = jsonencode(
          ~ {
              ~ Statement = [
                  ~ {
                        Action    = "s3:GetObject"
                        Effect    = "Allow"
                      ~ Principal = {
                          ~ AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH" -> "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                        }
                        Resource  = "arn:aws:s3:::my-bucket/*"
                        Sid       = ""
                    },
                ]
                Version   = "2012-10-17"
            }
        )
    }
Whereas I would like the principal to be added this way:
  # module.s3.aws_s3_bucket_policy.documentation_policy will be updated in-place
  ~ resource "aws_s3_bucket_policy" "documentation_policy" {
        bucket = "my-bucket"
      ~ policy = jsonencode(
          ~ {
                Statement = [
                    {
                        Action    = "s3:GetObject"
                        Effect    = "Allow"
                        Principal = {
                            AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U64NEVQ9IQHH"
                        }
                        Resource  = "arn:aws:s3:::my-bucket/*"
                        Sid       = ""
                    },
                  + {
                      + Action    = "s3:GetObject"
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3ORU58OAALJAP"
                        }
                      + Resource  = "arn:aws:s3:::my-bucket/*"
                      + Sid       = ""
                    },
                ]
                Version   = "2012-10-17"
            }
        )
    }
Is there any way to achieve this using Terraform 0.13.5?
For information, here is my documentation-bucket.tf, which I import into each workspace once it is created:
resource "aws_s3_bucket" "documentation" {
bucket = var.documentation_bucket
tags = {
BillingProject = var.billing_project
Environment = var.env
Terraform = "Yes"
}
logging {
target_bucket = var.website_logs_bucket
target_prefix = "s3-access-logs/${var.documentation_bucket}/"
}
lifecycle {
prevent_destroy = true
}
}
data "aws_iam_policy_document" "documentation" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.documentation.arn}/*"]
principals {
type = "AWS"
identifiers = [aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn]
}
}
}
resource "aws_s3_bucket_policy" "documentation_policy" {
bucket = aws_s3_bucket.documentation.id
policy = data.aws_iam_policy_document.documentation.json
}
Best regards

Assumptions:
Based on what you said, it seems you manage the same resource in different state files (an assumption based on "[...] which I import in each workspace once created").
You basically created a split-brain situation by doing so.
Assumption number two: you are deploying a single S3 bucket and multiple CloudFront distributions that access this single bucket, all in the same AWS account.
Answer:
While it is technically possible to do this, it is not how things are supposed to be set up. A single resource should only be managed by a single Terraform state (workspace), or you will see this expected but unwanted behavior of an unstable state.
I would suggest managing the S3 bucket in a single workspace configuration, or even creating a new workspace called "shared".
In this workspace, you can use the terraform_remote_state data source to read the state of the other workspaces and build a policy that includes all the OAIs extracted from those states, as sketched below. Of course, you can also do so without creating a new shared workspace.
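A minimal sketch of that idea, assuming an S3 backend with the default workspace key layout, and assuming each environment's workspace exposes its OAI IAM ARN as an output named oai_iam_arn (the bucket, key, and region below are placeholders):
data "terraform_remote_state" "dev" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"      # assumed state bucket
    key    = "env:/dev/portal.tfstate" # assumed key layout
    region = "eu-west-1"
  }
}

data "terraform_remote_state" "preprod" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "env:/preprod/portal.tfstate"
    region = "eu-west-1"
  }
}

data "aws_iam_policy_document" "documentation" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.documentation.arn}/*"]

    principals {
      type = "AWS"
      # Assumes each environment defines:
      # output "oai_iam_arn" { value = aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn }
      identifiers = [
        data.terraform_remote_state.dev.outputs.oai_iam_arn,
        data.terraform_remote_state.preprod.outputs.oai_iam_arn,
      ]
    }
  }
}
This way a single workspace owns the bucket policy, and adding an environment only means adding another remote state reference.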
I hope this helps, while it might not be the expected solution - and maybe my assumptions are wrong.
Last words:
It's not considered good practice to share resources between environments: data will most likely stay behind when you decommission an environment, and managing access can get complex and insecure.
Better to keep the environments as close to each other as possible, as in the dev/prod parity principle of the twelve-factor app, but try not to share resources. If you feel you need to share resources, take some time and challenge your architecture again.

Related

How to securely allow access to AWS Secrets Manager with Terraform and cloud-init

I have the situation whereby I am having Terraform create a random password and store it into AWS Secrets Manager.
My Terraform password and secrets manager config:
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
The above works well. However, I am not clear on how to achieve my final goal...
I have an AWS EC2 instance which is also configured via Terraform; when the system boots, it executes some cloud-init config which runs a setup script (a Bash script). The Bash setup script needs to install some server software and set a password for that server software. I am not certain how to securely access my_password from that Bash script during setup.
My Terraform config for the instance and cloud-init config:
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
...
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
my_password=`<MY PASSWORD IS NEEDED HERE>` # TODO retrieve via cURL call to Secrets Manager API?
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
I need to be able to securely retrieve the password from the AWS Secrets Manager when the cloud-init script runs, as I have read that embedding it in the bash script is considered insecure.
I have also read that AWS has the notion of Temporary Credentials, and that these can be associated with an EC2 instance - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Using Terraform can I create temporary credentials (say 10 minutes TTL) and grant them to my AWS EC2 instance, so that when my Bash script runs during cloud-init it can retrieve the password from the AWS Secrets Manager?
I have seen that on the Terraform aws_instance resource I can associate an iam_instance_profile, and I have started by trying something like:
resource "aws_iam_instance_profile" "my_instance_iam_instance_profile" {
name = "my_instance_iam_instance_profile"
path = "/development/"
role = aws_iam_role.my_instance_iam_role.name
tags = {
Environment = "dev"
}
}
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
// TODO - what how to specify a temporary credential access to a specific secret in AWS Secrets Manager from EC2???
tags = {
Environment = "dev"
}
}
resource "aws_instance" "my_instance_1" {
ami = data.aws_ami.amazon_linux_2.id
instance_type = "m5a.2xlarge"
user_data = data.cloudinit_config.my_instance_1.rendered
iam_instance_profile = join("", [aws_iam_instance_profile.my_instance_iam_instance_profile.path, aws_iam_instance_profile.my_instance_iam_instance_profile.name])
...
}
Unfortunately I can't seem to find any details on what I should put in the Terraform aws_iam_role which would allow my EC2 instance to access the Secret in the AWS Secrets Manager for a temporary period of time.
Can anyone advise? I would also be open to alternative approaches as long as they are also secure.
Thanks
You can create an aws_iam_policy or an inline policy which can allow access to certain SSM parameters based on date and time.
In the case of an inline policy, this can be attached to the instance role and would look something like this:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
inline_policy {
name = "my_inline_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "ssm:GetParameters",
"Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/development-*",
"Condition": {
"DateGreaterThan": {"aws:CurrentTime": "2020-04-01T00:00:00Z"},
"DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
}
}]
})
}
tags = {
Environment = "dev"
}
}
So in the end the suggestions from @ervin-szilagyi got me 90% of the way there... I then needed to make some small changes to his suggestion. I am including my updated changes here to hopefully help others who struggle with this.
My aws_iam_role that allows temporary access (10 minutes) to the password now looks like:
resource "aws_iam_role" "my_instance_iam_role" {
name = "my_instance_iam_role"
path = "/development/"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "access_my_password_iam_policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
],
"Resource": aws_secretsmanager_secret.my_password_secret.arn,
"Condition": {
"DateGreaterThan": { "aws:CurrentTime": timestamp() },
"DateLessThan": { "aws:CurrentTime": timeadd(timestamp(), "10m") }
}
},
{
"Effect": "Allow",
"Action": "secretsmanager:ListSecrets",
"Resource": "*"
}
]
})
}
tags = {
Environment = "dev"
}
}
To retrieve the password during cloud-init, in the end I switched to using the aws CLI command as opposed to cURL, which yielded a cloud-init config like the following:
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
# Retrieve SA password from AWS Secrets Manager
command="aws --output text --region ${local.aws_region} secretsmanager get-secret-value --secret-id ${aws_secretsmanager_secret.my_password_secret.id} --query SecretString"
max_retry=5
counter=0
until my_password=$($command)
do
sleep 1
[[ counter -eq $max_retry ]] && echo "Failed!" && exit 1
echo "Attempt #$counter - Unable to retrieve AWS Secret, trying again..."
((counter++))
done
server_password=$my_password /opt/srv/bin/install.sh
EOF
}
}
There are two main ways to achieve this:
pass the value as is during the instance creation with terraform
post-bootstrap invocation of some script
Your approach of polling it in the cloud-init is a hybrid one, which is perfectly fine, but I'm not sure whether you actually need to go down that route.
Let's explore the first option, where you do everything in Terraform. There are two sub-options, depending on whether you create the secret and the instance within the same Terraform execution run (within the same folder in which the code resides), or in a two-step process where you create the secret first and then the instance; there is a minor difference between the two in how the secret value is passed to the script.
Case A: in case they are created together:
You can pass the password directly to the script.
resource "random_password" "my_password" {
length = 16
lower = true
upper = true
number = true
special = true
override_special = "##$%"
}
resource "aws_secretsmanager_secret" "my_password_secret" {
name = "/development/my_password"
}
resource "aws_secretsmanager_secret_version" "my_password_secret_version" {
secret_id = aws_secretsmanager_secret.my_password_secret.id
secret_string = random_password.my_password.result
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${random_password.my_password.result} /opt/srv/bin/install.sh
EOF
}
}
Case B: if they are created in separate folders
You could use a data source to get the secret value in Terraform (the role with which you are deploying your Terraform code will need secretsmanager:GetSecretValue permission on that secret):
data "aws_secretsmanager_secret_version" "my_password" {
secret_id = "/development/my_password"
}
data "cloudinit_config" "my_instance_1" {
gzip = true
base64_encode = true
part {
content_type = "text/x-shellscript"
filename = "setup-script.sh"
content = <<EOF
#!/usr/bin/env bash
server_password=${data.aws_secretsmanager_secret_version.my_password.secret_string} /opt/srv/bin/install.sh
EOF
}
}
In both cases you wouldn't need to assign Secrets Manager permissions to the EC2 instance profile attached to the instance, you won't need to use curl or other means in the script, and the password would not be part of your bash script.
It will be stored in your Terraform state, though, so you should make sure that access to the state is restricted.
Even with the hybrid approach, where you get the secret from Secrets Manager during the instance bootstrap, the password would still be stored in your state, as you are creating that secret with the random_password resource (see Terraform's "Sensitive data in state").
Now, let's look at option 2. It is very similar to your approach, but instead of doing it in the user data, you can use Systems Manager Run Command to start your installation script as a post-bootstrap step. Then, depending on how you invoke the script, whether it is present locally on the instance or you are using a document with State Manager, you can either pass the secret to it as a variable again, or get it from Secrets Manager with the aws-cli or curl or whatever you prefer (which will require the necessary level of IAM permissions).
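For illustration only, a rough sketch of the Run Command route using an SSM document and a State Manager association (this wiring is an assumption, not part of the original answer; the instance would also need the SSM agent plus an instance profile allowing it to register with Systems Manager and to read the secret):
resource "aws_ssm_document" "run_install" {
  name          = "run-install-script" # placeholder name
  document_type = "Command"

  content = jsonencode({
    schemaVersion = "2.2"
    description   = "Run the server install script after bootstrap"
    mainSteps = [
      {
        action = "aws:runShellScript"
        name   = "runInstall"
        inputs = {
          runCommand = [
            # Same retrieval as in the cloud-init variant above
            "my_password=$(aws --output text --region ${local.aws_region} secretsmanager get-secret-value --secret-id ${aws_secretsmanager_secret.my_password_secret.id} --query SecretString)",
            "server_password=$my_password /opt/srv/bin/install.sh",
          ]
        }
      }
    ]
  })
}

resource "aws_ssm_association" "run_install" {
  name = aws_ssm_document.run_install.name

  targets {
    key    = "InstanceIds"
    values = [aws_instance.my_instance_1.id]
  }
}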

Terraform recreates API permissions for Lambda on each apply causing downtime (lambda module, serverless framework, VPC)

I have a Lambda created via the Terraform AWS Lambda module. It points to a versioned Lambda because I employ reserved concurrency. It also resides in a VPC.
The config looks like so:
module "my-lambda" {
source = "terraform-aws-modules/lambda/aws"
version = "~> v1.45.0"
function_name = "${local.lambda_name}"
description = local.lambda_name
handler = "handler.handler"
runtime = "python3.8"
hash_extra = local.lambda_name
attach_tracing_policy = true
tracing_mode = "Active"
publish = true
vpc_security_group_ids = [
// required VPC security groups
]
vpc_subnet_ids = var.private_subnet_ids
source_path = [
// ... abriged
]
build_in_docker = true
provisioned_concurrent_executions = var.provisioned_concurrency_lambdas
create_current_version_allowed_triggers = true
create_unqualified_alias_allowed_triggers = false
allowed_triggers = {
APIGateway = {
service = "apigateway"
source_arn = "${module.my_api_gateway.this_apigatewayv2_api_execution_arn}/*"
}
}
attach_policies = true
policies = [
// policies needed for a VPC lambda
]
}
I have found that in terraform plan, even if I do not make any changes and repeatedly issue terraform plan, these replacements keep occurring, which leads to re-creation of the API Gateway permissions and essentially a small downtime:
  # module.my_entire_api.module.my-lambda.aws_lambda_permission.current_version_triggers["APIGateway"] must be replaced
-/+ resource "aws_lambda_permission" "current_version_triggers" {
      ~ id        = "APIGateway" -> (known after apply)
      ~ qualifier = "1" -> (known after apply) # forces replacement
        # (5 unchanged attributes hidden)
    }

  # module.my_entire_api.module.my-lambda.aws_lambda_provisioned_concurrency_config.current_version[0] must be replaced
-/+ resource "aws_lambda_provisioned_concurrency_config" "current_version" {
      ~ id        = "env-my-lambda:1" -> (known after apply)
      ~ qualifier = "1" -> (known after apply) # forces replacement
        # (2 unchanged attributes hidden)
    }
There are some other Lambdas that do not run in a VPC. Presently I do not see this effect on them, although I am not completely sure that it never happens.
To be clear, I do not care about the concurrency config, as its recreation does not cause downtime. But I want to configure the module so that aws_lambda_permission does not get re-created. How can I do that?
This is a known issue in terraform-provider-aws: "terraform-provider-aws 3.13.0 and later including 3.25.0 cause lambdas in a VPC to be updated on every apply" (#17385).
From the module documentation, "How to deploy and manage Lambda Functions?":
publish = true
Typically, the Lambda Function resource updates when the source code changes. If publish = true is specified, a new Lambda Function version will also be created.
The publish flag:
variable "publish" {
description = "Whether to publish creation/change as new Lambda Function Version."
type = bool
default = false
}
And the relevant part of the module's aws_lambda_permission resource:
resource "aws_lambda_permission" "current_version_triggers" {
for_each = var.create && var.create_function && !var.create_layer && var.create_current_version_allowed_triggers ? var.allowed_triggers : {}
function_name = aws_lambda_function.this[0].function_name
qualifier = aws_lambda_function.this[0].version
So every time you deploy, a new version is published, and that version is referenced in the corresponding permission resource to update the policy. Hence it triggers updates every time.
In AWS Lambda function, what is the difference between deploy and publish?
Depending on where you are deriving your context for deploy and publish, normally deploy means redeploying your Lambda with new code, whereas publish means increasing your Lambda version (not redeploying code).
The problem I was facing boils down to several things.
When you use provisioned concurrency, you must "publish" your Lambda so it has a proper version qualifier (something like "1", NOT $LATEST); therefore the Lambda permissions that allow API Gateway to call the Lambda are tied to a specific Lambda version. When you make another version, these permissions are destroyed and created anew for the new Lambda version. The create_before_destroy lifecycle flag can possibly help (see the sketch below). I haven't seen these recreated for non-VPC Lambdas when there are no changes; when a Lambda is changed, there are a few minutes between deleting and recreating the reserved concurrency and the permissions inside the Lambda for the API Gateway.
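For reference, this is roughly what that flag would look like if you managed the permission yourself outside the module (a hypothetical standalone sketch with placeholder names; the module's internal aws_lambda_permission cannot be altered this way from the calling configuration):
# Hypothetical standalone permission, not part of the module above; names are placeholders.
resource "aws_lambda_permission" "apigateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.my_lambda.function_name
  qualifier     = aws_lambda_function.my_lambda.version
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${module.my_api_gateway.this_apigatewayv2_api_execution_arn}/*"

  lifecycle {
    # Create the permission for the new version before destroying the one tied
    # to the old version, to avoid a window with no allow statement.
    create_before_destroy = true
  }
}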
VPC Lambdas, in addition, experience a recreation of the concurrency config and permissions even if the Lambda has not changed, due to a Terraform provider bug: https://github.com/hashicorp/terraform-provider-aws/issues/17385.
The solution seems to be to not deal with Lambda permissions at all, but instead to give API Gateway "credentials" (i.e. a role with lambda:InvokeFunction rights) that allow it to call the Lambdas. This way, when an API Gateway "integration" (= Lambda) is called, it assumes the role, and permissions on the Lambda side are not needed. My tests show that in this case the sequence of updating the Lambda is correct: no unnecessary recreation of resources for VPC Lambdas, and when a Lambda is updated, first the new version is deployed and then API Gateway shifts to it (hence, no downtime). Production tests under a certain load also confirmed that we do not see an outage in practice.
Here's the snippet for the API Gateway configuration that permits Lambda invocations. It follows a recipe found at https://medium.com/@jun711.g/aws-api-gateway-invoke-lambda-function-permission-6c6834f14b61.
resource "aws_iam_role" "api_gateway_credentials_call_lambda" {
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "lambda.amazonaws.com"
},
Action = "sts:AssumeRole"
},
{
Effect = "Allow",
Principal = {
Service = "apigateway.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
inline_policy {
name = "permission-apigw-lambda-invokefunction"
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
Effect = "Allow",
Action = "lambda:InvokeFunction",
Resource = "arn:aws:lambda:*:${data.aws_caller_identity.current.account_id}:function:*"
}
]
})
}
}
Note that the last Resource = instruction would allow all Lambdas to be called by this role. You might want to restrict these rights to a subset of Lambdas for increased security and to reduce the chance of human error.
Having this role set up, I configure the API Gateway using the popular apigateway-v2 module from the serverless.tf framework:
module "api_gateway" {
source = "terraform-aws-modules/apigateway-v2/aws"
version = "~> 0.14.0"
# various parameters ...
# Routes and integrations
integrations = {
"GET /myLambda" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = my_lambda_qualified_arn
# This line enables the permissions:
credentials_arn = aws_iam_role.api_gateway_credentials_call_lambda.arn
}

How to resolve access denied after saving a bad bucket policy?

Using Terraform I've set up my stack. I just altered the bucket policy and applied it, but now I've found that the bucket policy is denying all actions, including management actions and altering the policy.
How might I update the policy so I can delete the bucket?
I am not able to access the bucket policy any more, but what was applied is still in my Terraform state. If I attempt a destroy on the bucket, it reveals the following (I've masked the IDs and account).
The following is just a sample, as there are 5 action blocks and each contains a dozen user IDs.
      - Statement = [
          - {
              - Action = [
                  - "s3:ListBucketVersions",
                  - "s3:ListBucketMultipartUploads",
                  - "s3:ListBucket",
                ]
              - Condition = {
                  - StringLike = {
                      - aws:userid = [
                          - "AROAXXXXXXXXXXXXXXXXA:*",
                          - "AROAXXXXXXXXXXXXXXXXB:*",
                        ]
                    }
                  - StringNotLike = {
                      - aws:userid = [
                          - "*:AROAAXXXXXXXXXXXXXXXA:user1",
                          - "*:AROAAXXXXXXXXXXXXXXXA:user2",
                          - "*:AROAAXXXXXXXXXXXXXXXA:*",
                        ]
                    }
                }
              - Effect    = "Deny"
              - Principal = "*"
              - Resource  = "arn:aws:s3:::my-account-bucket-name"
              - Sid       = "Deny bucket-level read operations except for authorised users"
            },
Based on the comments.
It seems that the new policy resulted in denying access to everyone. In such cases, AWS explains what to do in an article titled:
I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?
The process involves accessing the account as the root user and deleting the bucket policy.

Cannot create Elasticsearch domain using Terraform

I'm trying to create an Elasticsearch cluster using Terraform.
Using Terraform 0.11.13.
Can someone please point out why I'm not able to create log groups? What is the Resource Access Policy? Is it the same as the data "aws_iam_policy_document" I'm creating?
Note: I'm using elasticsearch_version = "7.9"
code:
resource "aws_cloudwatch_log_group" "search_test_log_group" {
name = "/aws/aes/domains/test-es7/index-logs"
}
resource "aws_elasticsearch_domain" "amp_search_test_es7" {
domain_name = "es7"
elasticsearch_version = "7.9"
.....
log_publishing_options {
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.search_test_log_group.arn}"
log_type = "INDEX_SLOW_LOGS"
enabled = true
}
access_policies = "${data.aws_iam_policy_document.elasticsearch_policy.json}"
}
data "aws_iam_policy_document" "elasticsearch_policy" {
version = "2012-10-17"
statement {
effect = "Allow"
principals {
identifiers = ["*"]
type = "AWS"
}
actions = ["es:*"]
resources = ["arn:aws:es:us-east-1:xxx:domain/test_es7/*"]
}
statement {
effect = "Allow"
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
actions = [
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
"logs:CreateLogStream",
]
resources = ["arn:aws:logs:*"]
}
}
I'm getting this error
aws_elasticsearch_domain.test_es7: Error creating ElasticSearch domain: ValidationException: The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/test-es7/index-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream. Please check the Resource Access Policy.
For Elasticsearch (ES) to be able to write to CloudWatch (CW) Logs, you have to provide a resource-based policy on your CW Logs.
This is achieved using aws_cloudwatch_log_resource_policy, which is missing from your code.
In fact, the TF docs have a ready-to-use example of how to do it for ES, so you should be able to just copy and paste it; a sketch along those lines follows.
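Roughly, your second statement (the one with the es.amazonaws.com principal) belongs in a CloudWatch Logs resource policy rather than in access_policies. A minimal sketch in the style of that documentation example (resource and policy names here are placeholders):
data "aws_iam_policy_document" "es_log_publishing" {
  statement {
    effect = "Allow"

    principals {
      identifiers = ["es.amazonaws.com"]
      type        = "Service"
    }

    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:PutLogEventsBatch",
    ]

    resources = ["arn:aws:logs:*"]
  }
}

resource "aws_cloudwatch_log_resource_policy" "es_log_publishing" {
  policy_name     = "es-log-publishing-policy"
  policy_document = "${data.aws_iam_policy_document.es_log_publishing.json}"
}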
ES access policies are different from CW log policies, as they determine who can do what on your ES domain. Thus, you would have to adjust that part of your code to meet your requirements.

Referencing gitlab secrets in Terraform

I am quite new to Terraform and GitLab CI, and there is something that I am trying to do here with them.
I want to use Terraform to create an IAM user and an S3 bucket. Using policies, allow certain operations on this S3 bucket to this IAM user. Have the IAM user's credentials saved in the artifactory.
Now the above is going to be my core module.
The core module looks something like the below:
Contents of : aws-s3-iam-combo.git
(The credentials for the IAM user with which all the Terraform would be run, say admin-user, would be stored in GitLab secrets.)
main.tf
resource "aws_s3_bucket" "bucket" {
bucket = "${var.name}"
acl = "private"
force_destroy = "true"
tags {
environment = "${var.tag_environment}"
team = "${var.tag_team}"
}
policy =<<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "${aws_iam_user.s3.arn}"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
resource "aws_iam_user" "s3" {
name = "${var.name}-s3"
force_destroy = "true"
}
resource "aws_iam_access_key" "s3" {
user = "${aws_iam_user.s3.name}"
}
resource "aws_iam_user_policy" "s3_policy" {
name = "${var.name}-policy-s3"
user = "${aws_iam_user.s3.name}"
policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::${var.name}",
"arn:aws:s3:::${var.name}/*"
]
}
]
}
EOF
}
outputs.tf
output "bucket" {
value = "${aws_s3_bucket.bucket.bucket}"
}
output "bucket_id" {
value = "${aws_s3_bucket.bucket.id}"
}
output "iam_access_key_id" {
value = "${aws_iam_access_key.s3.id}"
}
output "iam_access_key_secret" {
value = "${aws_iam_access_key.s3.secret}"
}
variables.tf
variable "name" {
type = "string"
}
variable "tag_team" {
type = "string"
default = ""
}
variable "tag_environment" {
type = "string"
default = ""
}
variable "versioning" {
type = "string"
default = false
}
variable "profile" {
type = "string"
default = ""
}
Anyone in the organization who now needs to create S3 buckets would need to create a new repo, something of the form:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
}
gitlab-ci.yml
stages:
  - plan
  - apply

plan:
  image: hashicorp/terraform
  stage: plan
  script:
    - terraform init
    - terraform plan

apply:
  image: hashicorp/terraform
  stage: apply
  script:
    - terraform init
    - terraform apply
  when: manual
  only:
    - master
And then the pipeline would trigger, and when this repo gets merged to master, the resources (S3 bucket and IAM user) would be created and the user would have this IAM user's credentials.
Now the problem is that we have multiple AWS accounts. So if a dev wants to create an S3 bucket in a certain account, it would not be possible with the above setup, as the admin-user whose creds are in GitLab secrets belongs to only one account.
Now I don't understand how to achieve this requirement of mine. I have the below idea (please suggest if there's a better way to do this):
Have multiple different creds set up in GitLab secrets, one for each AWS account in question
Take user input, specifying the AWS account they want the resources created in, as a variable. So something like, say:
main.tf
module "aws-s3-john-doe" {
source = "git::https://git#gitlab-address/terraform/aws-s3-iam-combo.git?ref=v0.0.1"
name = "john-doe"
tag_team = "my_team"
tag_environment = "staging"
aws_account = "account1"
}
And then in the aws-s3-iam-combo.git main.tf somehow read the creds for account1 from the GitLab secrets.
Now I do not know how to achieve the above, e.g. how do I read the required secret variable from GitLab, etc.
Can someone please help here?
You asked this some time ago, but maybe my idea still helps someone...
You can do this with envsubst (requires the gettext package to be installed on your runner or in the Docker image used to run the pipeline).
Here is an example:
First, in the project settings you set your different user accounts as environment variables (project secrets):
SECRET_1: my-secret-1
SECRET_2: my-secret-2
SECRET_3: my-secret-3
Then, create a file that holds a Terraform variable, let's name it vars_template.tf:
variable "gitlab_secrets" {
description = "Variables from GitLab"
type = "map"
default = {
secret_1 = "$SECRET_1"
secret_2 = "$SECRET_2"
secret_3 = "$SECRET_3"
}
}
In your CI pipeline, you can now configure the following:
plan:dev:
  stage: plan dev
  script:
    - envsubst < vars_template.tf > ./vars_envsubst.tf
    - rm vars_template.tf
    - terraform init
    - terraform plan -out "planfile_dev"
  artifacts:
    paths:
      - environments/dev/planfile_dev
      - environments/dev/vars_envsubst.tf

apply:dev:
  stage: apply dev
  script:
    - cd environments/dev
    - rm vars_template.tf
    - terraform init
    - terraform apply -input=false "planfile_dev"
  dependencies:
    - plan:dev
It's important to note that the original vars_template.tf has to be deleted, otherwise Terraform will throw an error that the variable is defined multiple times. You could circumvent this by storing the template file in a directory which is outside the Terraform working directory though.
But from the Terraform state you can see that the variable values were correctly substituted:
"outputs": {
"gitlab_secrets": {
"sensitive": false,
"type": "map",
"value": {
"secret_1": "my-secret-1",
"secret_2": "my-secret-2",
"secret_3": "my-secret-3"
}
}
}
You can then access the values with "${var.gitlab_secrets["secret_1"]}" in your Terraform resources etc.
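Tying this back to the multi-account question, a minimal sketch of feeding such values into an AWS provider for one account (the map key names and the region here are assumptions; you would store one access key / secret key pair per account as project secrets):
provider "aws" {
  region     = "eu-west-1"                                     # assumed region
  access_key = "${var.gitlab_secrets["account1_access_key"]}"  # assumed key name in the map
  secret_key = "${var.gitlab_secrets["account1_secret_key"]}"  # assumed key name in the map
}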
UPDATE: Note that this method will store the secrets in the Terraform state file, which can be a potential security issue if the file is stored in an S3 bucket for collaborative work with Terraform. The bucket should at least be encrypted. In addition, it's recommended to limit access to the files with ACLs so that, e.g., only a user terraform has access to it. And, of course, a user could reveal the secrets via Terraform outputs...