Terraform: resolving a dependency loop - amazon-web-services

I want an AWS instance that is allowed to read its own tags, but not those of any other resource. Normally, the idea of an instance being allowed to do something is expressed via an IAM role and an aws_iam_instance_profile, but when writing the policy for the role, I can't refer to the ARN of the instance, since that creates a loop.
It makes sense: normally, Terraform creates resources in order, and once a resource is created it never revisits it. What I want requires creating the instance without the IAM role, and attaching the role to the instance after the instance is created.
Is it possible with Terraform?
EDIT: (minimal example):
+; cat problem.tf
resource "aws_instance" "problem" {
instance_type = "t2.medium"
ami = "ami-08d489468314a58df"
iam_instance_profile = aws_iam_instance_profile.problem.name
}
resource "aws_iam_policy" "problem" {
name = "problem"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{ Effect = "Allow"
Action = ["ssm:GetParameters"]
Resource = [aws_instance.problem.arn]
}
]
})
}
resource "aws_iam_role" "problem" {
name = "problem"
managed_policy_arns = [aws_iam_policy.problem.id]
# Copy-pasted from aws provider documentation. AWS is overcomplicated.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
resource "aws_iam_instance_profile" "problem" {
name = "problem"
role = aws_iam_role.problem.name
}
+; terraform apply -refresh=false
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Cycle: aws_iam_instance_profile.problem, aws_instance.problem, aws_iam_policy.problem, aws_iam_role.problem
│
│
╵

The problem here arises because you've used the managed_policy_arns shorthand to attach the policy to the role in the same resource that declares the role. That shorthand can be convenient in simple cases, but it can also create cycle problems as you've seen here because it causes the role to refer to the policy, rather than the policy to refer to the role.
The good news is that you can avoid a cycle here by declaring that relationship in the opposite direction, either by using the separate aws_iam_role_policy_attachment resource type, which only declares the connection between the role and the policy, or by using aws_iam_role_policy to declare a policy that's directly attached to the role. You only really need the separate attachment if you intend to attach the same policy to multiple principals, so I'm going to show the simpler approach with aws_iam_role_policy here:
resource "aws_instance" "example" {
instance_type = "t2.medium"
ami = "ami-08d489468314a58df"
iam_instance_profile = aws_iam_instance_profile.example.name
}
resource "aws_iam_instance_profile" "example" {
name = "example"
role = aws_iam_role.example.name
}
resource "aws_iam_role" "example" {
name = "example"
# Allow the EC2 service to assume this role, so
# that the EC2 instance can act as it through its
# profile.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy" "example" {
name = "example"
role = aws_iam_role.example.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = ["ssm:GetParameters"]
Resource = [aws_instance.example.arn]
},
]
})
}
Now all of the dependency edges go in the correct order to avoid a cycle.
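If you did need to attach the same policy to multiple principals, the standalone-policy variant would replace the aws_iam_role_policy resource above with something like the following sketch (same example names assumed); the references still flow from the attachment toward the role and the policy, so there is still no cycle:
resource "aws_iam_policy" "example" {
  name = "example"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ssm:GetParameters"]
        Resource = [aws_instance.example.arn]
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = aws_iam_policy.example.arn
}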
The policy won't be attached to the role until both the role and the instance have been created, so it's important to consider that the software running in the instance might start up before the role's policy is in place. It should therefore be prepared to encounter access-denied errors for some time after boot and keep retrying periodically until it succeeds, rather than aborting at the first error.
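For illustration only (a sketch, not part of the original answer): a user_data argument on the aws_instance.example resource above could poll until the permission becomes effective. The SSM parameter name and region here are placeholders:
  user_data = <<-EOT
    #!/bin/bash
    # The role policy may not be attached yet at first boot, so keep
    # retrying instead of aborting on the first AccessDenied error.
    until aws ssm get-parameters --names "example" --region us-east-1; do
      echo "IAM permissions not effective yet; retrying in 10 seconds..."
      sleep 10
    done
  EOT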
If this is part of a shared module that exposes the EC2 instance's functionality as part of its abstraction, you can help the module's callers by being explicit about that hidden dependency on the aws_iam_role_policy: include it in the depends_on of any output values that refer to behavior of the EC2 instance that won't work until the role policy is ready. For example, if the EC2 instance provides an HTTPS service on port 443 that won't work until the policy is active:
output "service_url" {
value = "https://${aws_instance.example.private_ip}/"
# Anything which refers to this output value
# should also wait until the role policy is
# created before taking any of its actions,
# even though Terraform can't "see" that
# dependency automatically.
depends_on = [aws_iam_role_policy.example]
}

Related

Terraform recreates API permissions for Lambda on each apply causing downtime (lambda module, serverless framework, VPC)

I have a Lambda created via the terraform-aws-modules/lambda module. It points to a versioned Lambda because I employ provisioned concurrency. It also resides in a VPC.
The config looks like this:
module "my-lambda" {
source = "terraform-aws-modules/lambda/aws"
version = "~> v1.45.0"
function_name = "${local.lambda_name}"
description = local.lambda_name
handler = "handler.handler"
runtime = "python3.8"
hash_extra = local.lambda_name
attach_tracing_policy = true
tracing_mode = "Active"
publish = true
vpc_security_group_ids = [
// required VPC security groups
]
vpc_subnet_ids = var.private_subnet_ids
source_path = [
// ... abriged
]
build_in_docker = true
provisioned_concurrent_executions = var.provisioned_concurrency_lambdas
create_current_version_allowed_triggers = true
create_unqualified_alias_allowed_triggers = false
allowed_triggers = {
APIGateway = {
service = "apigateway"
source_arn = "${module.my_api_gateway.this_apigatewayv2_api_execution_arn}/*"
}
}
attach_policies = true
policies = [
// policies needed for a VPC lambda
]
}
I have found that even if I make no changes and repeatedly issue terraform plan, these replacements keep occurring, which leads to re-creation of the API Gateway permissions and, essentially, a small downtime:
  # module.my_entire_api.module.my-lambda.aws_lambda_permission.current_version_triggers["APIGateway"] must be replaced
-/+ resource "aws_lambda_permission" "current_version_triggers" {
      ~ id        = "APIGateway" -> (known after apply)
      ~ qualifier = "1" -> (known after apply) # forces replacement
        # (5 unchanged attributes hidden)
    }

  # module.my_entire_api.module.my-lambda.aws_lambda_provisioned_concurrency_config.current_version[0] must be replaced
-/+ resource "aws_lambda_provisioned_concurrency_config" "current_version" {
      ~ id        = "env-my-lambda:1" -> (known after apply)
      ~ qualifier = "1" -> (known after apply) # forces replacement
        # (2 unchanged attributes hidden)
    }
There are some other Lambdas that do not run in a VPC. Presently I do not see this effect for those, though I am not completely sure it never happens.
To be clear, I do not care about the concurrency config, as its recreation does not cause downtime. But I want to configure the module so that aws_lambda_permission does not get re-created. How can I possibly do that?
This is a known issue in terraform-provider-aws: "terraform-provider-aws 3.13.0 and later including 3.25.0 cause lambdas in a VPC to be updated on every apply" (#17385).
From the module documentation, "How to deploy and manage Lambda Functions?":
publish = true
Typically, Lambda Function resource updates when source code changes. If publish = true is specified a new Lambda Function version will also be created.
The publish flag:
variable "publish" {
description = "Whether to publish creation/change as new Lambda Function Version."
type = bool
default = false
}
And the aws_lambda_permission resource in the module source:
resource "aws_lambda_permission" "current_version_triggers" {
for_each = var.create && var.create_function && !var.create_layer && var.create_current_version_allowed_triggers ? var.allowed_triggers : {}
function_name = aws_lambda_function.this[0].function_name
qualifier = aws_lambda_function.this[0].version
So every time you deploy, a new version is published, and that version is referenced as the qualifier in the permission resource. Hence the permission is replaced on every deployment.
In AWS Lambda function, what is the difference between deploy and publish?
Depending on your context for deploy and publish: normally, deploy means redeploying your Lambda with new code, whereas publish means incrementing your Lambda version (not redeploying code).
The problem I was facing boils down to several things.
When you use Provisioned Concurrency, you must "publish" your Lambda so that it has a proper version qualifier (something like "1", not $LATEST); the Lambda permissions that allow API Gateway to call the Lambda are therefore tied to a specific Lambda version. When another version is published, those permissions are destroyed and created anew for the new version. The create_before_destroy lifecycle flag can possibly help, as sketched below. I haven't seen these recreated for non-VPC Lambdas when there are no changes; when a Lambda is changed, there are a few minutes between deleting and recreating the provisioned concurrency and the permissions inside the Lambda for the API Gateway.
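For reference, a sketch of that lifecycle flag on a permission resource you manage directly (illustrative only; a permission created inside a third-party module can't be modified this way, and the names here are hypothetical):
resource "aws_lambda_permission" "api_gateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.this.function_name
  qualifier     = aws_lambda_function.this.version
  principal     = "apigateway.amazonaws.com"

  lifecycle {
    # Create the permission for the new version before destroying the
    # old one, narrowing the window in which neither permission exists.
    create_before_destroy = true
  }
}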
VPC Lambdas, in addition, experience recreation of the concurrency config and permissions even when the Lambda has not changed, due to a Terraform provider bug: https://github.com/hashicorp/terraform-provider-aws/issues/17385.
The solution seems to be not to deal with Lambda permissions at all, but instead to give API Gateway "credentials" (i.e. a role with lambda:InvokeFunction rights) that allow it to call the Lambdas. This way, when an API Gateway "integration" (= Lambda) is invoked, it assumes the role, and permissions on the Lambda side are not needed. My tests show that with this setup the update sequence is correct: no unnecessary recreation of resources for VPC Lambdas, and when a Lambda is updated, the new version is deployed first and then API Gateway shifts to it (hence, no downtime). Production tests under load also confirmed that we see no outage in practice.
Here's the snippet for the API Gateway configuration that permits Lambda invocations. It follows a recipe found at https://medium.com/@jun711.g/aws-api-gateway-invoke-lambda-function-permission-6c6834f14b61.
resource "aws_iam_role" "api_gateway_credentials_call_lambda" {
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "lambda.amazonaws.com"
},
Action = "sts:AssumeRole"
},
{
Effect = "Allow",
Principal = {
Service = "apigateway.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
inline_policy {
name = "permission-apigw-lambda-invokefunction"
policy = jsonencode({
"Version" : "2012-10-17",
"Statement" : [
{
Effect = "Allow",
Action = "lambda:InvokeFunction",
Resource = "arn:aws:lambda:*:${data.aws_caller_identity.current.account_id}:function:*"
}
]
})
}
}
Note that the last Resource value allows this role to invoke every Lambda in the account. You might want to restrict these rights to a subset of functions (for example, with a name-prefix pattern such as "function:myproject-*") for better security and less room for human error.
With this role set up, I configure the API Gateway using the popular apigateway-v2 module from the serverless.tf framework:
module "api_gateway" {
source = "terraform-aws-modules/apigateway-v2/aws"
version = "~> 0.14.0"
# various parameters ...
# Routes and integrations
integrations = {
"GET /myLambda" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = my_lambda_qualified_arn
# This line enables the permissions:
credentials_arn = aws_iam_role.api_gateway_credentials_call_lambda.arn
}

Cannot create Elasticsearch domain using Terraform

I'm trying to create an Elasticsearch cluster using Terraform 0.11.13.
Can someone please point out why I'm not able to create log groups? What is the "Resource Access Policy"? Is it the same as the data "aws_iam_policy_document" I'm creating?
Note: I'm using elasticsearch_version = "7.9"
code:
resource "aws_cloudwatch_log_group" "search_test_log_group" {
name = "/aws/aes/domains/test-es7/index-logs"
}
resource "aws_elasticsearch_domain" "amp_search_test_es7" {
domain_name = "es7"
elasticsearch_version = "7.9"
.....
log_publishing_options {
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.search_test_log_group.arn}"
log_type = "INDEX_SLOW_LOGS"
enabled = true
}
access_policies = "${data.aws_iam_policy_document.elasticsearch_policy.json}"
}
data "aws_iam_policy_document" "elasticsearch_policy" {
version = "2012-10-17"
statement {
effect = "Allow"
principals {
identifiers = ["*"]
type = "AWS"
}
actions = ["es:*"]
resources = ["arn:aws:es:us-east-1:xxx:domain/test_es7/*"]
}
statement {
effect = "Allow"
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
actions = [
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
"logs:CreateLogStream",
]
resources = ["arn:aws:logs:*"]
}
}
I'm getting this error
aws_elasticsearch_domain.test_es7: Error creating ElasticSearch domain: ValidationException: The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/test-es7/index-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream. Please check the Resource Access Policy.
For ElasticSearch (ES) to be able to write to CloudWatch (CW) Logs, you have to provide a resource-based policy on your CW logs.
This is achieved using aws_cloudwatch_log_resource_policy, which is missing from your code.
In fact, the TF docs have a ready-to-use example of how to do it for ES, so you should be able to just copy and paste it.
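For reference, a sketch along the lines of that docs example (the policy name is arbitrary):
data "aws_iam_policy_document" "elasticsearch_log_publishing" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:PutLogEventsBatch",
    ]
    resources = ["arn:aws:logs:*"]

    principals {
      identifiers = ["es.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_cloudwatch_log_resource_policy" "elasticsearch_log_publishing" {
  policy_name     = "elasticsearch-log-publishing-policy"
  policy_document = data.aws_iam_policy_document.elasticsearch_log_publishing.json
}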
ES access policies are different from CW log policies: they determine who can do what on your ES domain. You would therefore have to adjust that part of your code to meet your requirements.

How can I provision an IAM role in AWS with Terraform?

As I'm new to Terraform, I'd like to ask for your help, as I've been stuck for almost a day.
When trying to apply IaC to deploy an Nginx service onto ECS (EC2 launch type) on AWS, I'm facing the following problem:
Error: Error creating IAM Role nginx-iam_role: MalformedPolicyDocument: Has prohibited field Resource status code: 400, request id: 0f1696f4-d86b-4ad1-ba3b-9453f3beff2b
I have already checked the documentation and the syntax is fine. What else could be wrong?
The following snippet creates the IAM infra:
provider "aws" {
region = "us-east-2"
}
data "aws_iam_policy_document" "nginx-doc-policy" {
statement {
sid = "1"
actions = [
"ec2:*"
]
resources = ["*"]
}
}
resource "aws_iam_role" "nginx-iam_role" {
name = "nginx-iam_role"
path = "/"
assume_role_policy = "${data.aws_iam_policy_document.nginx-doc-policy.json}"
}
resource "aws_iam_group_policy" "nginx-group-policy" {
name = "my_developer_policy"
group = "${aws_iam_group.nginx-iam-group.name}"
policy = "${data.aws_iam_policy_document.nginx-doc-policy.json}"
}
resource "aws_iam_group" "nginx-iam-group" {
name = "nginx-iam-group"
path = "/"
}
resource "aws_iam_user" "nginx-user" {
name = "nginx-user"
path = "/"
}
resource "aws_iam_user_group_membership" "nginx-membership" {
user = "${aws_iam_user.nginx-user.name}"
groups = ["${aws_iam_group.nginx-iam-group.name}"]
}
If you guys need the remaining code: https://github.com/atilasantos/iac-terraform-nginx.git
You are trying to use the aws_iam_policy_document.nginx-doc-policy document as an assume_role_policy, which does not work: an assume-role policy needs to define a principal that you trust and want to grant access to assume the role you are creating.
An assume-role policy could look like this if you want to grant EC2 instances access to the role via instance profiles. At the end, you can attach your initial policy document to the role as an inline policy via a new resource:
data "aws_iam_policy_document" "instance-assume-role-policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role" "nginx-iam_role" {
name = "nginx-iam_role"
path = "/"
assume_role_policy = data.aws_iam_policy_document.instance-assume-role-policy.json
}
resource "aws_iam_role_policy" "role_policy" {
name = "role policy"
role = aws_iam_role.nginx-iam_role.id
policy = data.aws_iam_policy_document.nginx-doc-policy.json
}
Instead of attaching the policy as an inline policy, you can also create a standalone IAM policy and attach it to the various IAM resources (e.g. aws_iam_policy plus aws_iam_role_policy_attachment for roles).
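For example, a minimal sketch of that managed-policy variant (the "nginx" resource names here are hypothetical):
resource "aws_iam_policy" "nginx" {
  name   = "nginx-policy"
  policy = data.aws_iam_policy_document.nginx-doc-policy.json
}

resource "aws_iam_role_policy_attachment" "nginx" {
  role       = aws_iam_role.nginx-iam_role.name
  policy_arn = aws_iam_policy.nginx.arn
}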
We created a bunch of open-source IAM modules (and others) to make IAM handling easier: find them here on GitHub. But there are more modules out there that you can try.

Terraform: Attaching an unmanaged IAM role

Terraform version: 12
We have a legacy, unmanaged by Terraform IAM role that I'd like to reference from an aws_iam_policy_attachment block and I attempted the following:
resource "aws_iam_policy_attachment" "example-attach" {
name = "example-attach"
roles = [
aws_iam_role.managed-role.name,
"arn:aws:iam::1234567890:role/unmanaged-role"
]
policy_arn = aws_iam_policy.example-policy.arn
}
A dry run works fine, but when applying, TF says:
– ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.#_-
Is there a way I can just reference the unmanaged role without defining it in TF? Or is there some non-destructive way of declaring it that doesn't change anything to do with the unmanaged role?
In roles, you are providing the role ARN, not the role name. Therefore, instead of the ARN, you should use its name:
resource "aws_iam_policy_attachment" "example-attach" {
name = "example-attach"
roles = [
aws_iam_role.managed-role.name,
"unmanaged-role"
]
policy_arn = aws_iam_policy.example-policy.arn
}
You can also use a data source:
data "aws_iam_role" "example" {
name = "unmanaged-role"
}
and then reference it in your resource:
resource "aws_iam_policy_attachment" "example-attach" {
name = "example-attach"
roles = [
aws_iam_role.managed-role.name,
data.aws_iam_role.example.name
]
policy_arn = aws_iam_policy.example-policy.arn
}

How to attach multiple IAM policies to IAM roles using Terraform?

I want to attach multiple IAM Policy ARNs to a single IAM Role.
One method is to create a new policy with privileges of all the policies (multiple policies).
But AWS provides some predefined IAM policies like AmazonEC2FullAccess, AmazonS3FullAccess, etc., and I want to use a combination of these for my role.
I could not find a way to do so in the Terraform documentation.
As per the documentation, we can use aws_iam_role_policy_attachment to attach a policy to a role, but I found no way to attach multiple policies to a single role, even though this is possible via the AWS console.
Please let me know if there is a method for doing this, or whether it is still a feature to be added.
The Terraform version I use is v0.9.5
For Terraform versions >= 0.12 the cleanest way to add multiple policies is probably something like this:
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
for_each = toset([
"arn:aws:iam::aws:policy/AmazonEC2FullAccess",
"arn:aws:iam::aws:policy/AmazonS3FullAccess"
])
role = var.iam_role_name
policy_arn = each.value
}
As described in Pranshu Verma's answer, the list of policies can also be put into a variable.
Using for_each instead of count has the advantage that insertions into the list are properly recognized by Terraform, so it really would only add the one new policy, whereas with count every policy after the insertion point would be changed (this is described in detail in this blog post).
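Combining the two ideas, a sketch with the ARNs in a variable (variable names assumed):
variable "iam_policy_arns" {
  description = "IAM policy ARNs to attach to the role"
  type        = set(string)
  default = [
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
  ]
}

resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  for_each   = var.iam_policy_arns
  role       = var.iam_role_name
  policy_arn = each.value
}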
Thanks Krishna Kumar R for the hint. Here is a little more polished answer I reached starting from yours.
# Define policy ARNs as list
variable "iam_policy_arn" {
  description = "IAM Policy to be attached to role"
  type        = "list"
}

# Then parse through the list using count
resource "aws_iam_role_policy_attachment" "role-policy-attachment" {
  role       = "${var.iam_role_name}"
  count      = "${length(var.iam_policy_arn)}"
  policy_arn = "${var.iam_policy_arn[count.index]}"
}
And finally, the list of policies should be specified in a *.tfvars file or on the command line using -var, for example:
iam_policy_arn = [
  "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
  "arn:aws:iam::aws:policy/AmazonS3FullAccess",
]
Did you try something like this:
resource "aws_iam_role" "iam_role_name" {
name = "iam_role_name"
}
resource "aws_iam_role_policy_attachment" "mgd_pol_1" {
name = "mgd_pol_attach_name"
role = "${aws_iam_role.iam_role_name.name}"
policy_arn = "${aws_iam_policy.mgd_pol_1.arn}"
}
resource "aws_iam_role_policy_attachment" "mgd_pol_2" {
name = "mgd_pol_attach_name"
role = "${aws_iam_role.iam_role_name.name}"
policy_arn = "${aws_iam_policy.mgd_pol_2.arn}"
}
Adding another option, which is similar to the accepted answer, but instead of:
policy_arn = "${var.iam_policy_arn[count.index]}"
You can use the element function:
policy_arn = "${element(var.iam_policy_arn,count.index)}"
I think that in some cases (like a project with a large amount of code) this could be more readable.
In my case I added multiple statements in one policy document:
data "aws_iam_policy_document" "sns-and-sqs-policy" {
statement {
sid = "AllowToPublishToSns"
effect = "Allow"
actions = [
"sns:Publish",
]
resources = [
data.resource.arn,
]
}
statement {
sid = "AllowToSubscribeFromSqs"
effect = "Allow"
actions = [
"sqs:changeMessageVisibility*",
"sqs:SendMessage",
"sqs:ReceiveMessage",
"sqs:GetQueue*",
"sqs:DeleteMessage",
]
resources = [
data.resource.arn,
]
}
}
resource "aws_iam_policy" "sns-and-sqs" {
name = "sns-and-sqs-policy"
policy = data.aws_iam_policy_document.sns-and-sqs-policy.json
}
resource "aws_iam_role_policy_attachment" "sns-and-sqs-role" {
role = "role_name"
policy_arn = aws_iam_policy.sns-and-sqs.arn
}
In short: simply combine your policies in one policy.
1. Use a data source with for_each to get all the policies:
data "aws_iam_policy" "management_group_policy" {
for_each = toset(["Billing", "AmazonS3ReadOnlyAccess"])
name = each.value
}
2. Attach them to the role like so:
resource "aws_iam_role_policy_attachment" "dev_role_policy_attachment" {
for_each = data.aws_iam_policy.management_group_policy
role = aws_iam_role.role.name
policy_arn = each.value.arn
}
This is an example of how I did it:
resource "aws_iam_group_policy_attachment" "policy_attach_example" {
for_each = aws_iam_policy.example
group = aws_iam_group.example.name
policy_arn = each.value["arn"]
}
So basically aws_iam_policy.example is a set of policies that I created in the same way, with for_each.
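For context, the policies themselves could be created with for_each like this (a sketch; the policy_documents map is an assumed input):
variable "policy_documents" {
  description = "Map of policy name => policy document JSON (assumed input)"
  type        = map(string)
}

resource "aws_iam_policy" "example" {
  for_each = var.policy_documents
  name     = each.key
  policy   = each.value
}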
Hope this helps you. I know I'm late, but I had a similar issue.