Importing terraform aws_iam_policy

I'm trying to import a terraform aws_iam_policy that gets automatically added by automation I don't own. The import seems to work, but once I run a terraform plan I get the following error:
* aws_iam_policy.mypolicy1: "policy": required field is not set
I'm running the terraform import as follows.
terraform import aws_iam_policy.mypolicy1 <myarn>
Here is my relevant terraform config:
resource "aws_iam_policy" "mypolicy1" {
}
resource "aws_iam_role_policy_attachment" "mypolicy1_attachment" {
  role       = "${aws_iam_role.myrole1.name}"
  policy_arn = "${aws_iam_policy.mypolicy1.arn}"
}
resource "aws_iam_role" "myrole1" {
  name               = "myrole1"
  assume_role_policy = "${file("../policies/ecs-role.json")}"
}
I double-checked that the terraform.tfstate includes the policy I'm trying to import. Is there something else I'm missing here?

You still need to provide the required fields in the Terraform configuration for the plan to work.
If you remove the aws_iam_policy resource from your configuration and run a plan after importing the policy, you should see that Terraform wants to destroy the policy because it is in the state file but not in the configuration.
Simply set up your aws_iam_policy resource to match the imported policy and then a plan should show no changes.
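For example, a rough sketch of a filled-in resource (the name, path, and policy file here are placeholders; they must match the actual attributes of the imported policy):
resource "aws_iam_policy" "mypolicy1" {
  name   = "mypolicy1"                             # must equal the imported policy's name
  path   = "/"                                     # and its path
  policy = "${file("../policies/mypolicy1.json")}" # JSON identical to the imported policy document
}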

I finally found a relatively elegant and universal workaround to address Amazon's poor implementation of the IAM policy import capability. The solution does NOT require that you reverse engineer Amazon's, or anybody else's, implementation of the "aws_iam_policy" resource that you want to import.
There are two steps.
Create an aws_iam_policy resource definition that has a "lifecycle" argument with an ignore_changes list. There are three fields in the aws_iam_policy resource that can trigger a change or replacement: policy, description, and path. Add these three fields to the ignore_changes list.
Import the external IAM policy, and attach it to the resource definition that you created in your resource file.
Resource file (ex: static-resources.tf):
resource "aws_iam_policy" "MyLambdaVPCAccessExecutionRole" {
  lifecycle {
    prevent_destroy = true
    ignore_changes  = [policy, path, description]
  }
  policy = jsonencode({})
}
Import Statement: Using the arn of the IAM policy that you want to import, import the policy and attach it to your resource definition.
terraform import aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
The magic is in the fields that you add to the ignore_changes list, plus the placeholder for the required "policy" argument. Since policy is a required field, Terraform won't let you proceed without it, even though it is one of the fields you told Terraform to ignore changes to.
Note: If you use modules, you will need to prefix your resource reference with "module.<module name>.". For example:
terraform import module.static.aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole

importing aws_iam_policy multiple times

I have created a resource stub for importing an IAM customer managed policy, as below.
resource "aws_iam_policy" "customer_managed_policy" {
  name   = var.customer_managed_policy_name
  policy = "{}"
}
The import command used is:
$ terraform import -var 'customer_managed_policy_name=ec2-readonly' aws_iam_policy.customer_managed_policy arn:aws:iam::<account ID>:policy/ec2-readonly
This works fine the first time. But if I want to make it dynamic in order to import any number of policies, I don't know how to do it.
The "aws_iam_policy" resource takes the policy name and the policy data/JSON as attributes, so multiple resources can be created from those with for_each or a list, but in the import command I need to pass the policy ARN, which is different for each policy.
I think there is a misunderstanding of how Terraform works.
Terraform maps 1 resource to 1 item in state, and the state file is used to manage all created resources.
To import "X" resources, "X" resources must exist in your Terraform configuration so "X" can be mapped to state.
Two simple ways to achieve this are to use "count" or "for_each" to map "X" resources to state, which then lets you import "X" resources, as sketched below.
Note that after you import a resource, if your Terraform configuration is not equal to the imported resource, then once you run terraform apply, Terraform will update all imported resources to match your configuration file.
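As a rough sketch, assuming the policy names are known up front (the names and account ID below are placeholders), you can declare the policies with for_each and then import each instance by its key:
variable "customer_managed_policy_names" {
  type    = set(string)
  default = ["ec2-readonly", "s3-readonly"] # hypothetical policy names
}

resource "aws_iam_policy" "customer_managed_policy" {
  for_each = var.customer_managed_policy_names

  name   = each.key
  policy = "{}" # placeholder; see the other answers for keeping it in sync or ignoring changes
}
Then run one import per instance, addressing it by its key:
$ terraform import 'aws_iam_policy.customer_managed_policy["ec2-readonly"]' arn:aws:iam::<account ID>:policy/ec2-readonly
$ terraform import 'aws_iam_policy.customer_managed_policy["s3-readonly"]' arn:aws:iam::<account ID>:policy/s3-readonly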

Terraform create and attach aws iam policies

I need to create several IAM policies from JSON files.
So I have a file called iam_policies.tf with many blocks like this:
resource "aws_iam_policy" "name" {
  name        = "policy-name"
  description = "Policy desc xxx"
  path        = "/"
  policy      = file("${path.module}/_/iam_policies/policy.json")
}
In a module I would like to use these policies as the value of a variable, but when I try to attach the policies...
resource "aws_iam_role_policy_attachment" "me" {
  for_each   = toset(var.policies)
  role       = aws_iam_role.me.name
  policy_arn = each.value
}
I get the error: The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
This is how the module is called from the configuration that creates the policy resources and other resources:
module "admin" {
  source = "./repo/module_name"
  policies = [
    aws_iam_policy.common.arn,
    aws_iam_policy.ses_sending.arn,
    aws_iam_policy.athena_readonly.arn,
    aws_iam_policy.s3_deploy.arn,
  ]
  ...
}
I've tried depends_on but it doesn't work.
I'm using Terraform Cloud, so I can't use apply -target.
What can I do? What's wrong?
Thank you.
If you can't use -target, you have to split your setup into two deployments. First you deploy your policies, and then their ARNs become inputs of the main deployment, as sketched below.
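A rough sketch of that split, with hypothetical wiring (the ARNs could be passed between the two deployments via a tfvars file, workspace variables, or a terraform_remote_state data source):
# First deployment: creates the policies and exposes their ARNs
output "policy_arns" {
  value = [
    aws_iam_policy.common.arn,
    aws_iam_policy.ses_sending.arn,
  ]
}

# Second deployment: receives the ARNs as a plain input variable,
# so the for_each keys are known at plan time
variable "policies" {
  type    = list(string)
  default = []
}

resource "aws_iam_role_policy_attachment" "me" {
  for_each   = toset(var.policies)
  role       = aws_iam_role.me.name
  policy_arn = each.value
}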

How to properly create gcp service-account with roles in terraform

Here is the terraform code I have used to create a service account and bind a role to it:
resource "google_service_account" "sa-name" {
account_id = "sa-name"
display_name = "SA"
}
resource "google_project_iam_binding" "firestore_owner_binding" {
role = "roles/datastore.owner"
members = [
"serviceAccount:sa-name#${var.project}.iam.gserviceaccount.com",
]
depends_on = [google_service_account.sa-name]
}
The above code worked great... except it removed datastore.owner from any other service account in the project that this role was previously assigned to. We have a single project that many teams use, and there are service accounts managed by different teams. My Terraform code would only have our team's service accounts, and we could end up breaking other teams' service accounts.
Is there another way to do this in terraform?
This can of course be done via the GCP UI or the gcloud CLI without any issue and without affecting other SAs.
From the Terraform docs, "google_project_iam_binding" is authoritative: it sets the IAM policy for the project and replaces any existing policy already attached. That means it completely replaces the members of the given role.
To just add a role to a new service account, without touching anybody else in that role, you should use the resource "google_project_iam_member":
resource "google_service_account" "sa-name" {
account_id = "sa-name"
display_name = "SA"
}
resource "google_project_iam_member" "firestore_owner_binding" {
project = <your_gcp_project_id_here>
role = "roles/datastore.owner"
member = "serviceAccount:${google_service_account.sa-name.email}"
}
Extra change from your sample: using the service account resource's generated email attribute removes the need for the explicit depends_on. You don't need depends_on if you do it like this, and you avoid errors from bad configuration.
Terraform can infer the dependency from the reference to another resource's attribute. Check the docs here to understand this behavior better.
It's a usual problem with Terraform: either you do everything with it, or nothing. If you are in between, unexpected things can happen!
If you want to use Terraform, you have to import the existing binding into the tfstate (see the doc for the binding), and, of course, you have to add all the accounts to the Terraform file. If not, the binding will be removed, but this time you will see the deletion in the terraform plan.
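For reference, a hedged example of what that import might look like (the project ID is a placeholder; the import ID for google_project_iam_binding is the project ID and the role, separated by a space):
terraform import google_project_iam_binding.firestore_owner_binding "<your-project-id> roles/datastore.owner"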

Terraform read details of existing resource

I am facing an issue in Terraform where I want to read the details of an existing resource (r1) created via the AWS web console.
I am using those details in the creation of a new resource (r2) via Terraform.
The problem is that Terraform is trying to destroy and recreate that resource, which is not desired as it will fail. How can I avoid destroying and recreating r1 when I run terraform apply?
Here is how I am doing it:
main.tf
resource "aws_lb" "r1" {
}
...
resource "aws_api_gateway_integration" "r2" {
  type = "HTTP"
  uri  = "${aws_lb.r1.dns_name}/o/v1/multi/get/m/content"
}
First I import that resource:
terraform import aws_lb.r1 {my_arn}
Next I apply:
terraform apply
Error:
aws_lb.r1: Error deleting LB: ResourceInUse: Load balancer 'my_arn' cannot be deleted because it is currently associated with another service
The import statement is meant for taking control over existing resources in your Terraform setup.
If your only intention is to derive information on existing resources (outside of your Terraform control), data sources are designed specifically for this need:
data "aws_lb" "r1" {
name = "lb_foo"
arn = "some_specific_arn" #you can use any selector you wish to query the correct LB
}
resource "aws_api_gateway_integration" "r2" {
type = "HTTP"
uri = "${data.aws_lb.r1.dns_name}/o/v1/multi/get/m/content"
}
You can add a lifecycle configuration block in the resource "aws_lb" "r1" (see: https://www.terraform.io/docs/configuration/resources.html#lifecycle) to tell Terraform to ignore changes in the resource.
I guess something like this should work:
resource "aws_lb" "r1"{
lifecycle {
ignore_changes = ["*"]
}
}
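Note that the string wildcard above is the older syntax; on Terraform 0.12 and later the equivalent is the all keyword, roughly:
resource "aws_lb" "r1" {
  lifecycle {
    # ignore every attribute change on this resource
    ignore_changes = all
  }
}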

Terraform: correct way to attach AWS managed policies to a role?

I want to attach one of the pre-existing AWS managed policies to a role; here's my current code:
resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" {
role = "${aws_iam_role.sto-test-role.name}"
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
Is there a better way to model the managed policy and then reference it instead of hardcoding the ARN? It just seems like whenever I hardcode ARNs / paths or other stuff like this, I usually find out later there was a better way.
Is there something already existing in Terraform that models managed policies? Or is hardcoding the ARN the "right" way to do it?
The IAM Policy data source is great for this. A data resource is used to describe data or resources that are not actively managed by Terraform, but are referenced by Terraform.
For your example, you would create a data resource for the managed policy as follows:
data "aws_iam_policy" "ReadOnlyAccess" {
arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
The name of the data source, ReadOnlyAccess in this case, is entirely up to you. For managed policies I use the same name as the policy name for the sake of consistency, but you could just as easily name it readonly if that suits you.
You would then attach the IAM policy to your role as follows:
resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" {
role = "${aws_iam_role.sto-test-role.name}"
policy_arn = "${data.aws_iam_policy.ReadOnlyAccess.arn}"
}
When using values that Terraform itself doesn't directly manage, you have a few options.
The first, simplest option is to just hard-code the value as you did here. This is a straightforward answer if you expect that the value will never change. Given that these "canned policies" are documented, built-in AWS features they likely fit this criteria.
The second option is to create a Terraform module and hard-code the value into that, and then reference this module from several other modules. This allows you to manage the value centrally and use it many times. A module that contains only outputs is a common pattern for this sort of thing, although you could also choose to make a module that contains an aws_iam_role_policy_attachment resource with the role set from a variable.
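For illustration, a minimal sketch of that outputs-only module pattern (the module path and output name here are made up):
# modules/aws-managed-policies/outputs.tf
output "readonly_access_arn" {
  value = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

# In a calling configuration
module "aws_managed_policies" {
  source = "../modules/aws-managed-policies"
}

resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" {
  role       = "${aws_iam_role.sto-test-role.name}"
  policy_arn = "${module.aws_managed_policies.readonly_access_arn}"
}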
The third option is to place the value in some location that Terraform can retrieve values from, such as Consul, and then retrieve it from there using a data source. With only Terraform in play, this ends up being largely equivalent to the second option, though it means Terraform will re-read it on each refresh rather than only when you update the module using terraform init -upgrade, and thus this could be a better option for values that change often.
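As a loose sketch of that third option, assuming the ARN has already been written into Consul's KV store under a made-up path:
data "consul_keys" "iam" {
  # Hypothetical key; reads terraform/aws/readonly_policy_arn from Consul's KV store
  key {
    name = "readonly_policy_arn"
    path = "terraform/aws/readonly_policy_arn"
  }
}

# The value is then available as "${data.consul_keys.iam.var.readonly_policy_arn}"
# and can be passed to policy_arn in an aws_iam_role_policy_attachment.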
The fourth option is to use a specialized data source that can read the value directly from the source of truth. Terraform does not currently have a data source for fetching information on AWS managed policies, so this is not an option for your current situation, but can be used to fetch other AWS-defined data such as the AWS IP address ranges, service ARNs, etc.
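For example, the AWS IP address ranges just mentioned can be read with the aws_ip_ranges data source (the region and service here are chosen arbitrarily):
data "aws_ip_ranges" "ec2_us_east" {
  regions  = ["us-east-1"]
  services = ["ec2"]
}

output "ec2_cidr_blocks" {
  value = "${data.aws_ip_ranges.ec2_us_east.cidr_blocks}"
}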
Which of these is appropriate for a given situation will depend on how commonly the value changes, who manages changes to it, and on the availability of specialized Terraform data sources.
I had a similar situation, and I didn't want to use the ARN in my Terraform script for two reasons:
If we do it in the web console, we don't really look at the ARN; we search for the policy name, then attach it to the role.
The ARN is not easy to remember; it is not meant for humans.
I would rather use the policy name than the ARN; here is my example:
# Get the policy by name
data "aws_iam_policy" "required-policy" {
  name = "AmazonS3FullAccess"
}

# Create the role
resource "aws_iam_role" "system-role" {
  name = "data-stream-system-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}

# Attach the policy to the role
resource "aws_iam_role_policy_attachment" "attach-s3" {
  role       = aws_iam_role.system-role.name
  policy_arn = data.aws_iam_policy.required-policy.arn
}