How to move existing Terraform resources from single items to a set? - google-cloud-platform

I have an SQL db module with single databases like this:
resource "google_sql_database" "projects" {
name = "projects"
instance = google_sql_database_instance.database.name
}
resource "google_sql_database" "markdown" {
name = "markdown"
instance = google_sql_database_instance.database.name
}
I'd like to switch to a set built from a variable instead:
variable "databases" {
type = list(string)
default = ["projects", "markdown"]
}
resource "google_sql_database" "database" {
for_each = toset(var.databases)
name = each.key
instance = google_sql_database_instance.database.name
}
When I run terraform apply, the CLI wants to recreate everything:
# module.sql.google_sql_database.database["markdown"] will be created
+ resource "google_sql_database" "database" {
    ...
    ...
# module.sql.google_sql_database.markdown will be destroyed
- resource "google_sql_database" "markdown" {
    ...
    ...
How can I avoid that and map the existing resources onto the new configuration?

You either need to run the terraform state mv command for each resource, or add moved blocks to your Terraform code, as shown below.
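For example, with moved blocks (available since Terraform 1.1) placed inside the module next to the new resource, Terraform renames the state entries on the next apply instead of destroying anything. The addresses are relative to the module, so no module.sql. prefix is needed:

# Inside the SQL module, alongside the new for_each resource
moved {
  from = google_sql_database.projects
  to   = google_sql_database.database["projects"]
}

moved {
  from = google_sql_database.markdown
  to   = google_sql_database.database["markdown"]
}

The CLI alternative operates on the fully qualified addresses, e.g. terraform state mv 'module.sql.google_sql_database.markdown' 'module.sql.google_sql_database.database["markdown"]'. Either way, a subsequent terraform plan should show no create/destroy actions for these databases.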

Related

Terraform loop through multiple providers (accounts) - invocation through module

I have a use case where I need help using for_each to loop through multiple providers (AWS accounts & regions). This is a module, and the TF will be using a hub-and-spoke model.
Below is the TF pseudo-code I would like to achieve.
module.tf
---------
app_accounts = [
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child1" },
  { "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child2" }
]
Below are the provider and resource files. Please ignore the variables and output files, as they are not relevant here.
provider.tf
------------
provider "aws" {
for_each = var.app_accounts
alias = "child"
profile = each.value.role
}
Here is the main resource block, where I want to associate multiple child accounts with a single master account, so I want to iterate through the loop:
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = vpc_id
zone_id = zone_id
}
resource "aws_route53_zone_association" "child" {
provider = aws.child
vpc_id = vpc_id
zone_id = zone_id
}
Any idea on how to achieve this, please? Thanks in advance.
The typical way to achieve your goal in Terraform is to define a shared module representing the objects that should be present in a single account and then to call that module once for each account, passing a different provider configuration into each.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  alias = "master"
  # ...
}

provider "aws" {
  alias   = "example1"
  profile = "example1"
}

module "example1" {
  source = "./modules/account"

  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example1
    aws.master = aws.master
  }
}

provider "aws" {
  alias   = "example2"
  profile = "example2"
}

module "example2" {
  source = "./modules/account"

  account    = "53xxxx08"
  app_vpc_id = "vpc-0fxxxxxfec8"

  providers = {
    aws        = aws.example2
    aws.master = aws.master
  }
}
The ./modules/account directory would then contain the resource blocks describing what should exist in each individual account. For example:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws, aws.master]
    }
  }
}

variable "account" {
  type = string
}

variable "app_vpc_id" {
  type = string
}

resource "aws_route53_zone" "example" {
  # (omitting the provider argument will associate
  # with the default provider configuration, which
  # is different for each instance of this module)
  # ...
}

resource "aws_route53_vpc_association_authorization" "master" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}

resource "aws_route53_zone_association" "child" {
  provider = aws.master
  vpc_id   = var.app_vpc_id
  zone_id  = aws_route53_zone.example.id
}
(I'm not sure if you actually intended var.app_vpc_id to be the VPC specified for those zone associations, but my goal here is only to show the general pattern, not to show a fully-working example.)
Using a shared module in this way allows you to avoid repeating the definitions for each account separately, and keeps each account-specific setting in only one place (either in a provider "aws" block or in a module block).
There is no way to make this more dynamic within the Terraform language itself. However, if you expect to be adding and removing accounts regularly and want to make it more systematic, you could use code generation for the root module to mechanically produce the provider and module blocks for each account. That ensures they all remain consistent and can be updated together if you ever need to change the interface of the shared module in a way that affects all of the calls.

Dynamically add resources in Terraform

I set up a Jenkins pipeline that launches Terraform to create a new EC2 instance in our VPC and register it to our private hosted zone on R53 (which is created at the same time) on every run.
I also managed to save the state in S3 so it doesn't fail with the hosted zone being re-created.
The main issue I have is that on every run Terraform keeps replacing the previous instance with the new one instead of adding it to the pool of instances.
How can I avoid this?
Here's a snippet of my code:
terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = "${var.region}"
}

data "aws_ami" "image" {
  # limit search criteria for performance
  most_recent = "${var.ami_filter_most_recent}"
  name_regex  = "${var.ami_filter_name_regex}"
  owners      = ["${var.ami_filter_name_owners}"]

  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = ["${var.ami_filter_purpose}"]
  }

  # filter on tag os
  filter {
    name   = "tag:os"
    values = ["${var.ami_filter_os}"]
  }
}

resource "aws_instance" "server" {
  # use extracted ami from image data source
  ami                    = data.aws_ami.image.id
  availability_zone      = data.aws_subnet.most_available.availability_zone
  subnet_id              = data.aws_subnet.most_available.id
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group}"]
  user_data              = "${var.user_data}"
  iam_instance_profile   = "${var.iam_instance_profile}"

  root_block_device {
    volume_size = "${var.root_disk_size}"
  }

  ebs_block_device {
    device_name = "${var.extra_disk_device_name}"
    volume_size = "${var.extra_disk_size}"
  }

  tags = {
    Name = "${local.available_name}"
  }
}

resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name

  vpc {
    vpc_id = var.vpc_id
  }
}

resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]

  depends_on = [
    aws_route53_zone.private
  ]
}
The outcome is that my previously created instance is destroyed and a new one is created. What I want is to keep adding instances with this code.
Thank you.
Your code creates only one instance, aws_instance.server, and any change to its properties will modify that one instance only. Since your backend is in S3, it acts as a single global state shared by every pipeline run. The same goes for aws_route53_record.record and anything else in your script.
If you want different pipelines to reuse the same script, you should either use different workspaces or create a separate Terraform state for each pipeline. The other alternative is to redefine your script to take a map of instances as an input variable and use for_each to create the different instances, as sketched below.
If those instances should be identical, you should instead manage their count with an aws_autoscaling_group and its desired capacity.
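Here is a minimal sketch of the for_each approach; the servers variable and its attribute shape are illustrative, not taken from the original script:

variable "servers" {
  # One entry per instance: key = logical name, value = per-instance settings
  type = map(object({
    instance_type = string
  }))
  default = {
    "server-a" = { instance_type = "t3.micro" }
    "server-b" = { instance_type = "t3.micro" }
  }
}

resource "aws_instance" "server" {
  for_each      = var.servers
  ami           = data.aws_ami.image.id
  subnet_id     = data.aws_subnet.most_available.id
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}

# Chain the for_each off the instances so each one gets its own record
resource "aws_route53_record" "record" {
  for_each = aws_instance.server

  zone_id = aws_route53_zone.private.zone_id
  name    = "${each.key}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [each.value.private_ip]
}

Adding an entry to the map creates one more instance and record on the next apply; removing an entry destroys only that pair. For the workspace route, running terraform workspace new <pipeline-name> before apply gives each pipeline its own state within the same S3 backend.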

Terraform: Create new GCE instance using same terraform code

I have created a new GCP VM instance successfully using Terraform modules. The contents of my module folder are as follows:
# main.tf
# google_compute_instance.default:
resource "google_compute_instance" "default" {
  machine_type = "${var.machinetype}"
  name         = "${var.name}"
  project      = "demo"
  tags         = []
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20210701"
      size  = 20
      type  = "pd-balanced"
    }
  }

  network_interface {
    network            = "default"
    subnetwork         = "default"
    subnetwork_project = "gcp-infrastructure-319318"
  }

  service_account {
    email = "971558418058-compute@developer.gserviceaccount.com"
    scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring.write",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
    ]
  }
}
-----------
# variables.tf
variable "zone" {
  default = "us-east1-b"
}

variable "machinetype" {
  default = "f1-micro"
}

------------------
# terraform.tfvars
machinetype = "g1-small"
zone        = "europe-west1-b"
My main code block is as follows:
$ cat providers.tf
provider "google" {
}

$ cat main.tf
module "gce" {
  source = "../../../modules/services/gce"
  name   = "new-vm-tf"
}
With this code I am able to create a new VM instance successfully:
$ gcloud compute instances list
NAME       ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
new-vm-tf  us-east1-b  f1-micro                   10.142.0.3                RUNNING
Now I have a requirement to create a new VM instance of machine type e2-standard. How can I handle that scenario?
If I edit my existing main.tf as shown below, it will destroy the existing VM instance to create the new one.
$ cat main.tf
module "gce" {
  source = "../../../modules/services/gce"
  name   = "new-vm-tf1"
}
terraform plan confirms it:
  ~ name = "new-vm-tf" -> "new-vm-tf1" # forces replacement
Plan: 1 to add, 0 to change, 1 to destroy.
I need pointers on reusing the same code to create a new VM without any changes to the existing one. Please suggest.
I recommend you take a deep dive into Terraform's mechanisms and best practices. I have two keywords to start with: tfstate and variables.
The tfstate is the deployment context. If you change the deployment and the context is no longer consistent, Terraform deletes what is not consistent and creates the missing pieces.
Variables are useful for reusing a piece of generic code by customizing the input values.
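Applied to your case: each module block has its own address in the tfstate, so adding a second block creates a second VM while leaving the first untouched. A minimal sketch, assuming the module also declares the name variable it already uses (the e2-standard-2 value is an assumption, since e2-standard needs a size suffix):

module "gce" {
  source = "../../../modules/services/gce"
  name   = "new-vm-tf"
}

# Second module instance = second state entry = second VM
module "gce_standard" {
  source      = "../../../modules/services/gce"
  name        = "new-vm-tf1"
  machinetype = "e2-standard-2" # assumed size suffix
}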

Terraform GCP executes resources in wrong order

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
It worked fine when executed the first time, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket.
Of course it can't create an object inside something that doesn't exist. Why is that?
You have to create a dependency between your objects and your bucket (see the code below). Otherwise, Terraform won't know that it has to create the bucket first and then the objects. This is related to how Terraform stores resources in a directed graph.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket first, then objects. This is equivalent to using depends_on in your google_storage_bucket_object resources, but in this particular case I recommend referencing the bucket from the objects rather than using an explicit depends_on.
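For comparison, the explicit depends_on variant would look like this; the attribute reference shown above is preferable because the dependency then lives in the same expression that consumes the value:

resource "google_storage_bucket_object" "file-main-py" {
  name   = "main.py"
  source = "app-files/main.py"
  bucket = "python_bucket_exam" # plain string, so no implicit dependency

  # Explicit ordering has to be declared by hand
  depends_on = [google_storage_bucket.bucket_for_python_application]
}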

Define tags in a central section in Terraform

I'm playing around with Terraform a bit and I was wondering if this is possible. It's best practice to assign tags to each resource you create on AWS, for example. So, what you do first is come up with a tagging strategy (for example, which business unit, the name of the app, the team responsible for it, ...).
However, in Terraform this means that you have to repeat the tags block for each resource. This isn't very convenient, and if you want to update one of the tag names, you have to update each resource that you created.
For example:
resource "aws_vpc" "vpc" {
cidr_block = "${var.cidr}"
tags {
Name = "${var.name}"
Project = "${var.projectname}"
Environment = "${var.environment}"
}
}
If I want to create a subnet and an EC2 instance in that VPC with the same tags, I have to repeat that tags block. If I want to update one of the tag names later on, I have to update each resource individually, which is very time-consuming and tedious.
Is there a possibility to create a block of tags in a centralized location and refer to it? I was thinking of modules, but that doesn't seem to fit the definition of a module.
You can also try local values, available from version 0.10.3. They allow you to assign a symbolic local name to an expression so it can be used multiple times in the configuration without repetition.
# Define the common tags for all resources
locals {
  common_tags = {
    Component   = "awesome-app"
    Environment = "production"
  }
}

# Create a resource that blends the common tags with instance-specific tags.
resource "aws_instance" "server" {
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = "${merge(
    local.common_tags,
    map(
      "Name", "awesome-app-server",
      "Role", "server"
    )
  )}"
}
From Terraform version 0.12 onwards:
This is the variable:
variable "sns_topic_name" {
type = string
default = "VpnTopic"
description = "Name of the sns topic"
}
This is the code
locals {
common_tags = {
Terraform = true
}
}
# Create a Resource
resource "aws_sns_topic" "sns_topic" {
name = var.sns_topic_name
tags = merge(
local.common_tags,
{
"Name" = var.sns_topic_name
}
)
}
Output will be:
+ tags = {
    + "Name"      = "VpnTopic"
    + "Terraform" = "true"
  }
Terraform now natively supports this for the AWS provider. As of version 3.38.0 of the Terraform AWS Provider, tags can also be defined at the provider level.
# Terraform 0.12 and later syntax
provider "aws" {
  # ... other configuration ...

  default_tags {
    tags = {
      Environment = "Production"
      Owner       = "Ops"
    }
  }
}

resource "aws_vpc" "example" {
  # ... other configuration ...

  # This configuration by default will internally combine tags defined
  # within the provider configuration block and those defined here
  tags = {
    Name = "MyVPC"
  }
}
For "aws_vpc.example" resouce below tags will be assigned, which is combination of tags defined under provider and tags defined under aws_vps section:
+ tags = {
    + "Environment" = "Production"
    + "Owner"       = "Ops"
    + "Name"        = "MyVPC"
  }