Terraform: how to import AWS cross-account resource?

How do I import an existing AWS resource into Terraform state, where that resource exists within a different account?
terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy
gives the following error:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_iam_policy.policy, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
The resource was created in one account using a different provider configuration, defined within a module called mymodule:
module "mymodule" {
// ... define variables for the module
}
// within the module
provider "aws" {
alias = "cross-account"
region = "eu-west-2"
assume_role {
role_arn = var.provider_role_arn
}
}
resource "aws_iam_policy" "policy" {
provider = "aws.cross-account"
name = var.policy-name
path = var.policy-path
description = var.policy-description
policy = var.policy-document
}
How do I import cross-account resources?
Update: using the -provider flag, I get a different error:
Error: Provider configuration not present
To work with module.mymodule.aws_iam_policy.policy (import
id "arn:aws:iam::123456789012:policy/somepolicytoimport") its original provider
configuration at provider.aws.cross-account is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.mymodule.aws_iam_policy.policy (import id
"arn:aws:iam::123456789012:policy/somepolicytoimport"), after which you can remove
the provider configuration again.

I think you have to assume the role of the second account as follows.
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
[1] : https://www.terraform.io/docs/providers/aws/index.html
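With a provider configuration like that in place (matching the one the resource was created with), the import command itself is unchanged; a minimal sketch reusing the ARN from the question:
$ terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy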

I got the same error while trying to import an AWS ACM certificate.
As the first step, before importing the resource, you need to create its configuration in the root module (or other relevant module):
resource "aws_acm_certificate" "cert" {
# (resource arguments)
}
Otherwise you'll get the following error:
Error: resource address "aws_acm_certificate.cert" does not exist in
the configuration.
Then you can import the resource by providing its relevant arn:
$ terraform import aws_acm_certificate.cert <certificate-arn>
Like @ydaetskcoR mentioned in the comments, you don't need to assume the role of the second account if you're using v0.12.10+.
But Terraform does need access credentials for the second account, so please make sure you provide that account's credentials (and not the source account's credentials), or you'll be stuck with Error: Cannot import non-existent remote object for a few hours like me (:
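For example, a minimal sketch assuming a hypothetical named CLI profile other-account holding credentials for the account that owns the certificate:
provider "aws" {
  region  = "eu-west-2"
  profile = "other-account" # hypothetical profile for the target account
}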

You can use multiple provider configurations if you have credentials for the other account.
# This is used by default
provider "aws" {
  region     = "us-east-1"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

provider "aws" {
  alias      = "another_account"
  region     = "us-east-1"
  access_key = "another-account-access-key"
  secret_key = "another-account-secret-key"
}

# To use the other configuration
resource "aws_instance" "foo" {
  provider = aws.another_account
  # ...
}
Here's the documentation: https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
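Importing a cross-account resource then uses the normal command, because the resource block itself binds the aliased provider; a sketch with a hypothetical instance ID:
$ terraform import aws_instance.foo i-0abc123def4567890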

Related

Terraform Provider issue: registry.terraform.io/hashicorp/s3

I currently have code that I have been using for quite some time that calls a custom S3 module. Today I tried to run the same code and started getting an error regarding the provider.
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/s3: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see
which modules are currently depending on hashicorp/s3, run the
following command:
    terraform providers
Doing some digging, it seems that Terraform is looking for a provider called registry.terraform.io/hashicorp/s3, which doesn't exist.
So far, I have tried the following things (the delete-and-reinitialize commands are sketched below):
Validated that the S3 resource code meets the standards of the 4.x upgrade HashiCorp did this year; plus I have been using it for a couple of months with no issues.
Deleted the .terraform directory and reran terraform init (no success, same error).
Deleted the .terraform directory and the .terraform.lock.hcl lock file and ran terraform init -upgrade (no success).
Tried to update my providers file to force an upgrade (no success).
Tried to change the provider version constraint to >= the current version to pull the latest version, with no success.
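Roughly, the clean-and-reinitialize attempts looked like this (a sketch of the commands, run from the configuration directory):
$ rm -rf .terraform .terraform.lock.hcl
$ terraform init -upgrade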
Reading further, this seemed to point at a caching problem with the Terraform providers. I tried to run terraform providers lock and received this error:
Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in
order to calculate their checksums: some providers could not be
installed:
- registry.terraform.io/hashicorp/s3: provider registry
  registry.terraform.io does not have a provider named
  registry.terraform.io/hashicorp/s3.
I'm kind of at my wits' end about what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:
version.tf
# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }

  required_version = ">= 1.2.0" # required Terraform version
}
S3 Module
I did not include locals, outputs, or variables, unless someone thinks we need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need for the provider issue; let me know if other files are needed.
resource "aws_s3_bucket" "buckets" {
count = length(var.bucket_names)
bucket = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
force_destroy = var.bucket_destroy
tags = local.all_tags
}
# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
block_public_acls = var.bucket_block_public_acls
ignore_public_acls = var.bucket_ignore_public_acls
block_public_policy = var.bucket_block_public_policy
restrict_public_buckets = var.bucket_restrict_public_buckets
}
resource "aws_s3_bucket_acl" "bucket_acl" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
acl = var.bucket_acl
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
id = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
status = "Enabled"
expiration {
days = var.bucket_backup_expiration_days
}
transition {
days = var.bucket_backup_days
storage_class = "GLACIER"
}
}
}
# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
sse_algorithm = var.bucket_sse
}
}
}
Looking for any other ideas I can use to fix this issue. Thank you!
Although you haven't included it in your question, I'm guessing that somewhere else in this Terraform module you have a block like this:
resource "s3_bucket" "example" {
}
For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn't belong to one of the providers in the module's required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name -- s3 in this case -- to the local names chosen in the required_providers block.
Given a resource block like the above, terraform init would notice that required_providers doesn't have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).
The correct name for this resource type is aws_s3_bucket, and so it's important to include the aws_ prefix when you declare a resource of this type:
resource "aws_s3_bucket" "example" {
}
This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
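To make the heuristic concrete, here is roughly how the two type names resolve (a sketch of the inference, not output from any tool):
# resource "s3_bucket" ...     -> provider local name "s3"  -> hashicorp/s3 (no such provider exists)
# resource "aws_s3_bucket" ... -> provider local name "aws" -> hashicorp/aws (declared in required_providers)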
My colleague and I finally found the problem. It turns out we had a data source lookup for an S3 bucket. Nothing was wrong with the module itself, but the configuration calling the module had a local.tf file with a data lookup that referenced S3 in the legacy, unprefixed format; see the change below:
WAS
data "s3_bucket" "MyResource" {}
TO
data "aws_s3_bucket" "MyResource" {}
Appreciate the responses from everyone. The resource type prefix was the root of the problem, but I forgot that data sources also need to be checked, not just resources.
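A quick way to hunt down any remaining legacy type names is a plain-text search (a sketch; the pattern is only illustrative):
$ grep -rn '"s3_bucket"' .    # matches resource and data blocks still using the unprefixed type name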

Terraform resources dependency management with Google Cloud IAM

I am still in the process of learning terraform.
I am trying to deploy a Cloud SQL database and provide a default service account to access it.
The following piece of code does not work:
# create default service account
resource "google_service_account" "default_service_account" {
  account_id   = "${var.database_name}-${random_id.db_name_suffix.hex}"
  display_name = "Cloud SQL default Service Account for ${var.database_name}-${random_id.db_name_suffix.hex}"
}

# grant role sqlUser for default service account
resource "google_project_iam_member" "iam_binding_default_service_account" {
  project = var.project_id
  role    = "roles/cloudsql.instanceUser"
  member  = "serviceAccount:${default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com"

  depends_on = [
    google_service_account.default_service_account,
  ]
}
terraform plan complains with:
Error: Reference to undeclared resource
on database.tf line 78, in resource "google_project_iam_member" "iam_binding_default_service_account":
78: member = "serviceAccount:${default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com"
A managed resource "default_service_account" "account_id" has not been
declared in the root module.
I do not understand why the depends_on piece of code does not seem to work, and why Terraform does not create the default_service_account before trying to populate the iam_binding_default_service_account.
It should be (you forgot the google_service_account resource type in the reference):
member = "serviceAccount:${google_service_account.default_service_account.account_id}.${module.project.project_id}.iam.gserviceaccount.com}"

Terraform Create resource in Child AWS Account

My goal is to create a Terraform Module which creates a Child AWS account and creates a set of resources inside the account (for example, AWS Config rules).
The account is created with the following aws_organizations_account definition:
resource "aws_organizations_account" "account" {
name = "my_new_account"
email = "john#doe.org"
}
And an example aws_config_config_rule would be something like:
resource "aws_config_config_rule" "s3_versioning" {
name = "my-config-rule"
description = "Verify versioning is enabled on S3 Buckets."
source {
owner = "AWS"
source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
}
scope {
compliance_resource_types = ["AWS::S3::Bucket"]
}
}
However, doing this creates the AWS Config rule in the master account, not the newly created child account.
How can I define the config rule to apply to the child account?
So, I was actually able to achieve this by defining a new provider in the module which assumes the OrganizationAccountAccessRole inside the newly created account.
Here's an example:
// Define new account
resource "aws_organizations_account" "my_new_account" {
  name  = "my_new_account"
  email = "john@doe.org"
}

provider "aws" {
  /* other provider config */
  assume_role {
    // Assume the organization access role
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }
  alias = "my_new_account"
}

resource "aws_config_config_rule" "s3_versioning" {
  // Tell resource to use the new provider
  provider = aws.my_new_account

  name        = "my-config-rule"
  description = "Verify versioning is enabled on S3 Buckets."

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
  }

  scope {
    compliance_resource_types = ["AWS::S3::Bucket"]
  }
}
However, it should be noted that defining the provider inside the module leads to a few quirks, notably that once you source this module you cannot simply remove it. If you do, it will throw Error: Provider configuration not present, since you will have also removed the provider definition.
But if you don't plan on removing these accounts (or are okay with doing it manually when needed), then this should be good!
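One way to avoid that quirk is to keep the provider configuration in the root module and pass it into the child module instead (a sketch with a hypothetical module name and path, assuming the child module simply uses the default aws provider it receives):
provider "aws" {
  alias = "my_new_account"
  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.my_new_account.id}:role/OrganizationAccountAccessRole"
  }
}

module "child_account_baseline" {      # hypothetical module name
  source = "./modules/child-account"   # hypothetical path
  providers = {
    aws = aws.my_new_account
  }
}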

How to deploy a Terraform module with many roles?

I'm currently working on an AWS Terraform project where I have an array of account IDs (as variables) for different accounts.
variable "slave_account_id" {
default = ["5686435678", "9889865446"]
}
Each of these accounts exposes a role that allows my current AWS account (the one linked with Terraform) to deploy a module into it, by assuming that role in each account.
Thus, I would like to create a different provider for each role, based on the variable array "slave_account_id".
I tried to do it this way:
provider "aws" {
counter = "${length(var.slave_account_id)}"
alias = "aws-assume-${counter.index}"
region = "eu-west-1"
assume_role {
role_arn = "arn:aws:iam::${var.slave_account_id[counter.index]}:role/slave_role_for_master"
session_name = "${var.slave_session_name[counter.index]}"
external_id = "EXTERNAL_ID"
}
}
This way, I planned to use this code when instantiating my module:
module "my_super_module" {
counter = "${length(var.slave_account_id)}"
providers = {
aws = "aws.aws-assume-${counter.index}"
}
[...]
}
But this doesn't work (from what I understood, I cannot "concatenate" a variable inside the alias of a provider, because providers have to be defined before any interpolation can happen).
Here is the execution result (the errors are due to the alias section of the provider):
Error: Invalid provider configuration alias
An alias must be a valid name. A name must start with a letter and may contain
only letters, digits, underscores, and dashes.
Error: Duplicate provider configuration
on main.tf line 5:
5: provider "aws" {
A default (non-aliased) provider configuration for "aws" was already given at
main.tf:1,1-15. If multiple configurations are required, set the "alias"
argument for alternative configurations.
Error: Unsuitable value type
on main.tf line 8, in provider "aws":
8: alias = "aws-assume-${counter.index}"
Unsuitable value: value must be known
Error: Variables not allowed
on main.tf line 8, in provider "aws":
8: alias = "aws-assume-${counter.index}"
Variables may not be used here.
Error: Invalid provider configuration reference
on main.tf line 33, in module "my-lambda":
33: aws = "aws.aws-assume-${counter.index}"
A provider configuration reference must not be given in quotes.
Hence I am a bit lost...
How do I deploy a module with a list of role IDs (one module instance for each account)?
Provider configurations in Terraform are not dynamically-constructable (that is, to decide which to create based on a value) because Terraform needs to associate providers with resources very early in the lifecycle, during graph construction and before expression evaluation is possible.
Instead, we can refactor the problem so that each module takes a fixed number of AWS providers (most often one, but in some cases two if the module's purpose is e.g. to set up peering between two regions or two accounts) and then instantiate the module multiple times in the root:
provider "aws" {
alias = "eu-west-1_5686435678"
region = "eu-west-1"
assume_role {
role_arn = "arn:aws:iam::acct5686435678:role/admin"
session_name = "whatever_session_name"
external_id = "EXTERNAL_ID"
}
}
provider "aws" {
alias = "eu-west-1_9889865446"
region = "eu-west-1"
assume_role {
role_arn = "arn:aws:iam::acct9889865446:role/admin"
session_name = "whatever_session_name"
external_id = "EXTERNAL_ID"
}
}
module "acct5686435678" {
source = "./modules/aws-account"
providers = {
aws = aws.eu-west-1_5686435678
}
}
module "acct9889865446" {
source = "./modules/aws-account"
providers = {
aws = aws.eu-west-1_9889865446
}
}
module "peering_5686435678_9889865446" {
source = "./modules/aws-account-peering"
providers = {
aws.from = aws.eu-west-1_5686435678
aws.to = aws.eu-west-1_9889865446
}
}
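For the two-provider peering module to accept aws.from and aws.to, the child module has to declare those aliases itself; a minimal sketch, assuming Terraform v0.15 or later where configuration_aliases is available:
# inside ./modules/aws-account-peering
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.from, aws.to]
    }
  }
}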
Instantiating the same module multiple times is a common technique for situations where the same infrastructure must be created over multiple AWS accounts or over multiple AWS regions.
With that said, if the multiple AWS accounts are representing separate environments rather than separate components within an environment, it's often preferable to use a separate root configuration per environment while still sharing modules, so that updates to each environment are entirely separated, each environment has its own state, etc.

Replicate infrastructure using Terraform module throws error for same name IAM policy

I have created basic infrastructure as below, and I'm trying to see if modules work for me to replicate infrastructure on AWS using Terraform.
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-east-1"
region = "us-east-1"
}
variable "company" {}
module "test1" {
source = "./modules"
}
module "test2" {
source = "./modules"
}
And my module is as follows:
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
But when I use the same module twice in my main.tf, it gives me an error about the duplicate IAM policy name. How should I handle such a scenario?
I want to use the same main.tf for prod/stage/dev environments. How do I achieve that?
My actual module looks like the code in this question.
How do I make use of modules and name module resources dynamically, e.g. stage_iam_policy / prod_iam_policy? Is this the right approach?
You're naming the IAM policy the same regardless of where you use the module. IAM policies are uniquely identified by their name rather than some random ID (unlike EC2 instances, which are identified as i-...), so you can't have two IAM policies with the same name in the same AWS account.
Instead, you must add some extra uniqueness to the name, such as a module parameter appended to the name, with something like this:
module "test1" {
source = "./modules"
enviroment = "foo"
}
module "test1" {
source = "./modules"
enviroment = "bar"
}
and in your module you'd have the following:
variable "enviroment" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${var.enviroment}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
Alternatively, if you don't have something useful to interpolate, such as a name or environment, you could just use some randomness:
resource "random_pet" "random" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${random_pet.random.id}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
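Note that random_pet comes from the hashicorp/random provider, so on Terraform 0.13+ the module should also declare it; a minimal sketch:
terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}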