how to import multiple s3 bucket resources to single terraform resource name - amazon-web-services

I am trying to import existing S3 buckets into my Terraform code. I have a lot of buckets in S3, so I want to collect them under a single resource name. For example, consider 3 buckets in S3: 2 of them were created with Terraform, but 1 of them was not.
terraformed-bucket
terraformed-bucket-2
nonterraformed-bucket
I have one resource name for those two buckets. While migrating to Terraform code, I want to import nonterraformed-bucket into the existing resource name that is used for the terraformed buckets, but I can't :/
resource "aws_s3_bucket" "tfer--buckets" {
count = "${length(var.bucket_names)}"
bucket = "${element(var.bucket_names, count.index)}"
# count = length(local.bucket_names)
# bucket = local.bucket_names[count.index]
force_destroy = "false"
grant {
id = "674f4d195ff567a2eeb7ee328c84410b02484f646c5f1f595f83ecaf5cfbf"
permissions = ["FULL_CONTROL"]
type = "CanonicalUser"
}
object_lock_enabled = "false"
request_payer = "BucketOwner"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
bucket_key_enabled = "true"
}
}
versioning {
enabled = "false"
mfa_delete = "false"
}
}
and my variables:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2"]
}
These are the entries in my Terraform state:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I tried to import nonterraformed-bucket into this existing resource:
resource "aws_s3_bucket" "tfer--buckets" {}
with this command:
terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
but the output of terraform state list is still the same; nothing changed:
mek-bash#%: terraform import aws_s3_bucket.tfer--buckets nonterraformed-bucket
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
I don't want to use a separate resource for each bucket, so I want to import each outside bucket under the same resource name as the others, i.e. include it as [2] in the same resource, just like:
mek-bash#%: terraform state list
aws_s3_bucket.tfer--buckets[0]
aws_s3_bucket.tfer--buckets[1]
aws_s3_bucket.tfer--buckets[2] (should represent nonterraformed-bucket)
Do you have any suggestions for this? Or is there a way to import non-terraformed resources into a single resource name?

You have to add your nonterraformed-bucket to bucket_names:
variable "bucket_names" {
type = list
default = ["terraformed-bucket", "terraformed-bucket-2", "nonterraformed-bucket"]
}
and then import it as [2] (third bucket):
terraform import aws_s3_bucket.tfer--buckets[2] nonterraformed-bucket

It worked with:
terraform import 'aws_s3_bucket.tfer--buckets[2]' nonterraformed-bucket
It was fixed after quoting the address: 'aws_s3_bucket.tfer--buckets[2]'
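For reference, a rough end-to-end sequence, assuming nonterraformed-bucket has already been appended to var.bucket_names as shown in the answer (the single quotes stop the shell from interpreting the brackets):
terraform import 'aws_s3_bucket.tfer--buckets[2]' nonterraformed-bucket
terraform state list    # should now also show aws_s3_bucket.tfer--buckets[2]
terraform plan          # review any drift between the imported bucket and the shared configuration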

Related

Terraform import : ignore specific resource from public module

I am trying to import the state of a (private) S3 bucket which was created via the console. I'm using the public S3 module. I was able to create a module block and import the state of the bucket. However, terraform plan also tries to create an aws_s3_bucket_public_access_block. How do I ignore or stop Terraform from creating that specific resource from the module?
main.tf
locals {
  region = "dev"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-${local.region}-bucket"
  acl    = "private"

  block_public_acls   = true
  block_public_policy = true

  lifecycle_rule = [
    {
      id      = "weekly_expiration_rule"
      enabled = true
      expiration = {
        days = 7
      }
    }
  ]
}
Import command for bucket - terraform import module.s3_bucket.aws_s3_bucket.this my-dev-bucket
Meanwhile, when I try importing the public access block resource, I run into the error Error: Cannot import non-existent remote object, even when I have the settings configured on the bucket.
Looking into the source code more carefully, specifically this section:
resource "aws_s3_bucket_public_access_block" "this" {
count = var.create_bucket && var.attach_public_policy ? 1 : 0
Setting attach_public_policy to false got me what I needed.
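For completeness, a minimal sketch of what that module block could look like with the flag set (the bucket name is carried over from the example above, and attach_public_policy is the module input referenced in the count expression quoted here; treat this as an approximation rather than the exact working configuration):
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-${local.region}-bucket"
  acl    = "private"

  # with this set to false, the module's count on
  # aws_s3_bucket_public_access_block.this evaluates to 0,
  # so terraform plan no longer tries to create it
  attach_public_policy = false
}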
You should run terraform plan to see the real output, and read the source code on GitHub (resource "aws_s3_bucket" "this"); you can see the count argument at line 6.
# module.s3_bucket.aws_s3_bucket.this[0] will be created
...
# module.s3_bucket.aws_s3_bucket_public_access_block.this[0] will be created
...
You can import with these commands:
terraform import module.s3_bucket.aws_s3_bucket.this[0] my-test-bucket-823567823576023
terraform import module.s3_bucket.aws_s3_bucket_public_access_block.this[0] my-test-bucket-823567823576023
After importing into my test main.tf below, terraform plan shows 0 to add:
terraform {
  required_version = ">= 0.13.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.69"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 2.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}

module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-test-bucket-823567823576023"
  acl    = "private"
}

How do I apply a lifecycle rule to an EXISTING s3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, then it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google Fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into the Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
And then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and then you can make changes or add new things to your bucket using Terraform.
As the last step, just run terraform apply and the lifecycle rule will be added.
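Roughly, the whole sequence looks like this (bucket_name is the placeholder bucket used above):
terraform import aws_s3_bucket.test-bucket bucket_name
terraform plan    # review the diff; ideally only the lifecycle_rule addition shows up
terraform apply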

Terraform: how to import AWS cross-account resource?

How do I import an existing AWS resource into Terraform state, where that resource exists within a different account?
terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy
gives the following error:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_iam_policy.policy, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
The resource was created in one account using a different provider configuration defined within a module called mymodule:
module "mymodule" {
// ... define variables for the module
}
// within the module
provider "aws" {
alias = "cross-account"
region = "eu-west-2"
assume_role {
role_arn = var.provider_role_arn
}
}
resource "aws_iam_policy" "policy" {
provider = "aws.cross-account"
name = var.policy-name
path = var.policy-path
description = var.policy-description
policy = var.policy-document
}
How do I import cross-account resources?
Update: using the -provider flag, I get a different error:
Error: Provider configuration not present
To work with module.mymodule.aws_iam_policy.policy (import
id "arn:aws:iam::123456789012:policy/somepolicytoimport") its original provider
configuration at provider.aws.cross-account is required, but it has been
removed. This occurs when a provider configuration is removed while objects
created by that provider still exist in the state. Re-add the provider
configuration to destroy
module.mymodule.aws_iam_policy.policy (import id
"arn:aws:iam::123456789012:policy/somepolicytoimport"), after which you can remove
the provider configuration again.
I think you have to assume the role of the second account, as follows:
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
[1] : https://www.terraform.io/docs/providers/aws/index.html
I got the same error while trying to import an AWS ACM certificate.
As the first step, before importing the resource, you need to create its configuration in the root module (or in another relevant module):
resource "aws_acm_certificate" "cert" {
# (resource arguments)
}
Otherwise you'll get the following error:
Error: resource address "aws_acm_certificate.cert" does not exist in
the configuration.
Then you can import the resource by providing its relevant ARN:
$ terraform import aws_acm_certificate.cert <certificate-arn>
As @ydaetskcoR mentioned in the comments, you don't need to assume the role of the second account if you're using v0.12.10+.
But Terraform does need access credentials for the second account, so please make sure you provide the relevant account's credentials (and not the source account's credentials), or you'll be stuck with Error: Cannot import non-existent remote object for a few hours like me (:
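As one possible way to hand Terraform the second account's credentials, here is a hedged sketch of a provider block that points at a separate AWS CLI profile (the profile name is just a placeholder for whatever is configured locally; the alias and region come from the question):
provider "aws" {
  alias  = "cross-account"
  region = "eu-west-2"

  # hypothetical named profile holding the target account's credentials
  profile = "target-account"
}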
You can use multiple provider configurations if you have credentials for the other account.
# This is used by default
provider "aws" {
  region     = "us-east-1"
  access_key = "my-access-key"
  secret_key = "my-secret-key"
}

provider "aws" {
  alias      = "another_account"
  region     = "us-east-1"
  access_key = "another-account-access-key"
  secret_key = "another-account-secret-key"
}

# To use the other configuration
resource "aws_instance" "foo" {
  provider = aws.another_account

  # ...
}
Here is the documentation: https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
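With an aliased configuration like that re-added (and credentials for the other account in place), the import command from the question should be retryable as-is:
terraform import module.mymodule.aws_iam_policy.policy arn:aws:iam::123456789012:policy/mypolicy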

Renaming s3 bucket in Terraform (but not S3) causes create then destroy?

I want to refactor my Terraform scripts a bit.
Before:
resource "aws_s3_bucket" "abc" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
After:
resource "aws_s3_bucket" "def" {
bucket = "my-bucket"
acl = "private"
region = "${var.aws_region}"
tags = {
Name = "My bucket"
}
versioning {
enabled = true
mfa_delete = false
}
}
As you can see, only the name in Terraform has changed (abc -> def).
However, this causes a create / destroy of the bucket in terraform plan.
I expected Terraform to recognize the buckets as the same (they have the same attributes, including bucket).
Questions:
Why is this?
Is there a way to refactor Terraform scripts without destroying infrastructure?
You can use terraform state mv to reflect this change in the state.
In your case, this would be:
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
From my own experience, this works well and I recommend doing it instead of working with bad names.
Terraform does not recognize such changes, no :-)
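A quick way to sanity-check the rename afterwards (resource names taken from the example above):
terraform state mv aws_s3_bucket.abc aws_s3_bucket.def
terraform state list    # should now show aws_s3_bucket.def
terraform plan          # should report no changes for the bucket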

terraform count dependent on data from target environment

I'm getting the following error when initially trying to plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the -target option, but it doesn't seem to work on a data source.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
remove the resource
apply the project
add the resource back
apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
I finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample, with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource aws_default_vpc default {}
// Add this
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = "${data.aws_vpc.default.id}"
}

resource "aws_efs_file_system" "efs" {
  creation_token = local.project_id
  encrypted      = true
}

resource "aws_efs_mount_target" "target" {
  depends_on = [aws_efs_file_system.efs]

  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because after the first apply, the aws_default_vpc was loaded into the state file.
So an alternate solution, without making changes to the original .tf file, would be to use the -target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this, as it requires a special case on the first deployment, which is unusual compared to the Terraform deployments I've worked with.
The aws_default_vpc isn't a resource TF can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and into your TF state. This will allow you to begin managing it and to inspect it when you run plan or apply. Otherwise, TF doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it's a special type of protected resource as described above.
With that said, go get the default VPC ID for the region you are deploying to in your account, then import it into your TF state. Terraform should then be able to inspect it and count the number of subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?