Terraform import: ignore a specific resource from a public module

I'm trying to import the state of a (private) S3 bucket that was created via the console. I'm using the public s3 module. I was able to create a module block and import the state of the bucket. However, terraform plan also tries to create an aws_s3_bucket_public_access_block. How do I ignore or stop Terraform from creating that specific resource from the module?
main.tf
locals {
  region = "dev"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-${local.region}-bucket"
  acl    = "private"

  block_public_acls   = true
  block_public_policy = true

  lifecycle_rule = [
    {
      id      = "weekly_expiration_rule"
      enabled = true

      expiration = {
        days = 7
      }
    }
  ]
}
Import command for the bucket: terraform import module.s3_bucket.aws_s3_bucket.this my-dev-bucket
Meanwhile, when I try importing the public access block resource, I run into the error "Error: Cannot import non-existent remote object", even though I have the settings configured on the bucket.

Looking into the source code more carefully, specifically this section:
resource "aws_s3_bucket_public_access_block" "this" {
  count = var.create_bucket && var.attach_public_policy ? 1 : 0
setting attach_public_policy to false got me what I needed.
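For reference, a minimal sketch of the adjusted module block (same locals as above; attach_public_policy is the module input gating the count expression quoted above):
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-${local.region}-bucket"
  acl    = "private"

  # With attach_public_policy = false the count above evaluates to 0,
  # so no aws_s3_bucket_public_access_block is created (and the
  # block_public_* inputs are ignored).
  attach_public_policy = false
}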

You should run terraform plan to see the real output, and read the source code on GitHub (resource "aws_s3_bucket" "this"); you can see the count expression at line 6 of that file.
# module.s3_bucket.aws_s3_bucket.this[0] will be created
...
# module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
...
You can import with these commands:
terraform import module.s3_bucket.aws_s3_bucket.this[0] my-test-bucket-823567823576023
terraform import module.s3_bucket.aws_s3_bucket_public_access_block.this[0] my-test-bucket-823567823576023
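Depending on your shell (zsh in particular), the index brackets may need quoting so they are not treated as glob patterns, e.g.:
terraform import 'module.s3_bucket.aws_s3_bucket.this[0]' my-test-bucket-823567823576023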
My test main.tf is below; after the imports, terraform plan shows 0 to add:
terraform {
  required_version = ">= 0.13.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.69"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-test-bucket-823567823576023"
  acl    = "private"
}
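After both imports, a plan against this configuration should have nothing to do; the output ends roughly like this (illustrative):
$ terraform plan
...
No changes. Your infrastructure matches the configuration.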

Related

How to import multiple S3 bucket resources to a single Terraform resource name

I am trying to import existing S3 buckets into my Terraform code. I have a lot of buckets in S3, so I want to collect them under a single resource name. For example, consider 3 buckets in S3, where 2 of them were created with Terraform but 1 of them was not:
terraformed-bucket
terraformed-bucket-2
nonterraformed-bucket
I have one resource name for those two buckets. I want to import nonterraformed-bucket into the existing resource name used for the terraformed buckets when migrating to Terraform code, but I can't.
resource "aws_s3_bucket" "tfer--buckets" {
count = "${length(var.bucket_names)}"
bucket = "${element(var.bucket_names, count.index)}"
# count = length(local.bucket_names)
# bucket = local.bucket_names[count.index]
force_destroy = "false"
grant {
id = "674f4d195ff567a2eeb7ee328c84410b02484f646c5f1f595f83ecaf5cfbf"
permissions = ["FULL_CONTROL"]
type = "CanonicalUser"
}
object_lock_enabled = "false"
request_payer = "BucketOwner"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
bucket_key_enabled = "true"
}
}
versioning {
enabled = "false"
mfa_delete = "false"
}
}
and my variables:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2"]
}
These are the states in my Terraform code:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I tried to import nonterraformed-bucket into this existing resource:
resource "aws_s3_bucket" "tfer--buckets" {}
with this command:
terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
but the output of terraform state list is still the same; nothing changed:
mek-bash#%: terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I don't want to use a separate resource for each bucket, so I want to import each outside bucket under the same resource name as the others; nonterraformed-bucket should appear as [2] under the same resource name. Just like:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
aws_s3_bucket.tfer--buckets[2] (should represent nonterraformed-bucket)
Do you have any suggestions for this? Or is there a way to import non-terraformed resources into a single resource name?
You have to add your nonterraformed-bucket to bucket_names:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2", "nonterraformed-bucket"]
}
and then import it as [2] (third bucket):
terraform import aws_s3_bucket.tfer--buckets[2] nonterraformed-bucket
It worked with:
terraform import 'aws_s3_bucket.tfer--buckets[2]' nonterraformed-bucket
It was fixed by the quotes around 'aws_s3_bucket.tfer--buckets[2]', which keep the shell from interpreting the brackets.
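As an aside, count-based addresses like [2] shift whenever the order of bucket_names changes. Keying the resource with for_each on the bucket name gives each bucket a stable address instead; a minimal sketch (remaining arguments omitted):
variable "bucket_names" {
  type    = set(string)
  default = ["terraformed-bucket", "terraformed-bucket-2", "nonterraformed-bucket"]
}

resource "aws_s3_bucket" "tfer--buckets" {
  # One instance per bucket name, addressed by name rather than index
  for_each = var.bucket_names
  bucket   = each.value
  # ... other arguments as in the original resource
}
The import command is then keyed by name (existing [0]/[1] entries would need terraform state mv to the new addresses):
terraform import 'aws_s3_bucket.tfer--buckets["nonterraformed-bucket"]' nonterraformed-bucket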

Terraform loop through multiple providers (accounts) - invocation through module

I have a use case where I need help using for_each to loop through multiple providers (AWS accounts & regions). This is a module, and the TF will be using a hub and spoke model.
Below is the TF pseudo-code I would like to achieve.
module.tf
---------
app_accounts = [
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child1" },
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child2" }
]
Below are the provider and resource files; please ignore the variables and outputs files, as they're not relevant here.
provider.tf
------------
provider "aws" {
for_each = var.app_accounts
alias = "child"
profile = each.value.role
}
Here is the main resource block, where I want to associate multiple child accounts with a single master account, so I want to iterate through the loop:
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = vpc_id
zone_id = zone_id
}
resource "aws_route53_zone_association" "child" {
provider = aws.child
vpc_id = vpc_id
zone_id = zone_id
}
Any idea how to achieve this, please? Thanks in advance.
The typical way to achieve your goal in Terraform is to define a shared module representing the objects that should be present in a single account and then to call that module once for each account, passing a different provider configuration into each.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  alias = "master"
  # ...
}

provider "aws" {
  alias   = "example1"
  profile = "example1"
}

module "example1" {
  source = "./modules/account"

  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example1
    aws.master = aws.master
  }
}

provider "aws" {
  alias   = "example2"
  profile = "example2"
}

module "example2" {
  source = "./modules/account"

  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example2
    aws.master = aws.master
  }
}
The ./modules/account directory would then contain the resource blocks describing what should exist in each individual account. For example:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws, aws.master]
    }
  }
}

variable "account" {
  type = string
}

variable "app_vpc_id" {
  type = string
}

resource "aws_route53_zone" "example" {
  # (omitting the provider argument will associate
  # with the default provider configuration, which
  # is different for each instance of this module)
  # ...
}

resource "aws_route53_vpc_association_authorization" "master" {
  provider = aws.master

  vpc_id  = var.app_vpc_id
  zone_id = aws_route53_zone.example.id
}

resource "aws_route53_zone_association" "child" {
  provider = aws.master

  vpc_id  = var.app_vpc_id
  zone_id = aws_route53_zone.example.id
}
(I'm not sure if you actually intended var.app_vpc_id to be the VPC specified for those zone associations, but my goal here is only to show the general pattern, not to show a fully-working example.)
Using a shared module in this way allows you to avoid repeating the definitions for each account separately, and keeps each account-specific setting in only one place (either in a provider "aws" block or in a module block).
There is no way to make this more dynamic within the Terraform language itself. However, if you expect to be adding and removing accounts regularly and want to make it more systematic, you could use code generation for the root module to mechanically produce the provider and module blocks for each account, ensuring that they all remain consistent and can be updated together in case you need to change the interface of the shared module in a way that affects all of the calls.

Terraform Provider issue: registry.terraform.io/hashicorp/s3

I currently have code that I have been using for quite some time that calls a custom S3 module. Today I tried to run the same code and started getting an error regarding the provider:
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/s3: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see
which modules are currently depending on hashicorp/s3, run the
following command:
    terraform providers
Doing some digging, it seems that Terraform is looking for a provider registry.terraform.io/hashicorp/s3, which doesn't exist.
So far, I have tried the following things:
Validated that the S3 resource code meets the standards of the 4.x upgrade HashiCorp did this year. Plus, I have been using it for a couple of months with no issues.
Deleted the .terraform directory and reran terraform init (no success, same error).
Deleted the .terraform directory and the .terraform.lock.hcl file and ran terraform init -upgrade (no success).
Updated my providers file to try to force an upgrade (no success).
Changed the provider version constraint to >= the current version to pull the latest version, with no success.
Reading further, it looked like a caching problem with the Terraform providers. I tried to run terraform providers lock and received this error:
Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in
order to calculate their checksums: some providers could not be
installed:
- registry.terraform.io/hashicorp/s3: provider registry
  registry.terraform.io does not have a provider named
  registry.terraform.io/hashicorp/s3.
I'm kind of at my wits' end over what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:
version.tf
# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }

  required_version = ">= 1.2.0" # required Terraform version
}
S3 Module
I did not include locals, outputs, or variables, but I can add them if someone thinks we need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need to diagnose the provider issue; let me know if other files are needed.
resource "aws_s3_bucket" "buckets" {
count = length(var.bucket_names)
bucket = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
force_destroy = var.bucket_destroy
tags = local.all_tags
}
# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
block_public_acls = var.bucket_block_public_acls
ignore_public_acls = var.bucket_ignore_public_acls
block_public_policy = var.bucket_block_public_policy
restrict_public_buckets = var.bucket_restrict_public_buckets
}
resource "aws_s3_bucket_acl" "bucket_acl" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
acl = var.bucket_acl
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
id = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
status = "Enabled"
expiration {
days = var.bucket_backup_expiration_days
}
transition {
days = var.bucket_backup_days
storage_class = "GLACIER"
}
}
}
# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
sse_algorithm = var.bucket_sse
}
}
}
Looking for any other ideas I can use to fix this issue. Thank you!
Although you haven't included it in your question, I'm guessing that somewhere else in this Terraform module you have a block like this:
resource "s3_bucket" "example" {
}
For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn't belong to one of the providers in the module's required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name -- s3 in this case -- to the local names chosen in the required_providers block.
Given a resource block like the above, terraform init would notice that required_providers doesn't have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).
The correct name for this resource type is aws_s3_bucket, and so it's important to include the aws_ prefix when you declare a resource of this type:
resource "aws_s3_bucket" "example" {
}
This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
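If you're not sure where the stray resource type lives, the terraform providers command mentioned in the error message prints which module requires which provider; the output looks roughly like this (illustrative, with a hypothetical module.my_s3_module as the culprit):
$ terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws] >= 4.9.0
├── provider[registry.terraform.io/hashicorp/local] ~> 2.2.1
└── module.my_s3_module
    └── provider[registry.terraform.io/hashicorp/s3]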
My colleague and I finally found the problem. It turns out we had a data call to the S3 bucket. Nothing was wrong with the module, but the place where I was calling the module had a local.tf file where I was referencing s3 in the legacy format; see the change below:
WAS
data "s3_bucket" "MyResource" {}
TO
data "aws_s3_bucket" "MyResource" {}
Appreciate the responses from everyone. A resource block was the root of the problem, but I forgot that a data block is also a resource to check.

Using a Terraform provider in an AWS module

I am going through the Terraform documentation, and it seems unclear to me. I'm quite new to Terraform, so no doubt I'm misunderstanding something here:
https://developer.hashicorp.com/terraform/language/modules/develop/providers
Problem:
My Terraform pipeline is returning the following warning:
│   on waf-cdn.tf line 9, in module "waf_cdn":
│    9: aws = aws.useastone
│
│ Module module.waf_cdn does not declare a provider named aws.
│ If you wish to specify a provider configuration for the module, add an
│ entry for aws in the required_providers block within the module.
My root module is calling a child WAF module. I understand that I need to configure my provider within my root module. There are 2 files within my root module:
...terraform.tf...
terraform {
  backend "s3" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.33.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"
    }
  }
}
...and providers.tf...
provider "aws" {
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = "us-east-1"
alias = "useastone"
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${local.role_name}"
}
}
provider "aws" {
region = var.region
alias = "master"
assume_role {
role_arn = replace(
"arn:aws:iam::${var.master_account_id}:role/${local.role_name}",
local.app_region,
"master"
)
}
}
When calling the child module, the scope attribute of the WAF needs to specify the region as us-east-1 for CLOUDFRONT, since it is a global service in AWS. Therefore, I need to pass the useastone provider when calling the child WAF module, as seen below:
module "waf_cdn" {
source = "../modules/qa-aws-waf-common"
name = "${local.waf_prefix}-cdn"
logging_arn = aws_kinesis_firehose_delivery_stream.log_stream_cdn.arn
scope = "CLOUDFRONT"
tags = merge(module.tags.tags, { name = "${local.name_prefix}-qa-waf-cdn" })
providers = {
aws = aws.useastone
}
}
With this code I'm getting the warning shown above.
I'm banging my head against the documentation here, so any help would be really appreciated.
Here's hoping, thanks!
As per the documentation you linked, here is the passage you are interested in [1]:
Additional provider configurations (those with the alias argument set) are never inherited automatically by child modules, and so must always be passed explicitly using the providers map.
Since that is the case, you need to declare the provider in the module's required_providers block as well:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.33.0"
    }
  }
}
Note that because your providers map passes the aliased configuration as the module's default provider (aws = aws.useastone), a plain aws entry is enough. You would only add configuration_aliases = [ aws.useastone ] here if the module's resources referenced the configuration by that alias, in which case the call would pass aws.useastone = aws.useastone instead.
That would probably be an additional providers.tf file in ../modules/qa-aws-waf-common.
[1] https://developer.hashicorp.com/terraform/language/modules/develop/providers#passing-providers-explicitly

How to get default GCP project and region with Terraform?

For a standard tf boilerplate:
provider "google" {}
How do I get the provider's default project and region? Something analogous to aws_region in AWS (like in this question), but for Google Compute Engine (GCE/GCP).
In some cases these are specified externally in the environment variables:
export GOOGLE_PROJECT=myproject
export GOOGLE_REGION=europe-west2
terraform apply
Less often, they are overridden in HCL code:
provider "google" {
project = "myproject"
region = "europe-west2"
}
This fails with the error "A managed resource "provider" "google" has not been declared in the root module.":
output "region" {
value = provider.google.region
}
Basic
Use the google_client_config data source:
data "google_client_config" "this" {}
output "region" {
value = data.google_client_config.this.region
}
output "project" {
value = data.google_client_config.this.project
}
Multiple providers
This can be used even with multiple providers:
provider "google" {
region = "europe-west2"
}
provider "google" {
alias = "another" // alias marks this as an alternate provider
region = "us-east1"
}
data "google_client_config" "this" {
provider = google
}
data "google_client_config" "that" {
provider = google.another
}
output "regions" {
value = [data.google_client_config.this.region, data.google_client_config.that.region]
}
Output:
$ terraform init
$ terraform apply --auto-approve

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

regions = [
  "europe-west2",
  "us-east1",
]