I'm trying to create a launch configuration, an ELB, and 2 ASGs. I assume one ELB can serve 2 ASGs (I'm not sure).
I have the launch configuration and ASG code in one file, calling the ASG module. My question is: can I create 2 ASGs using a single Terraform file, or with two files in a single repo?
Also, please let me know whether this is a good configuration.
When I tried to put two different files calling the same module, I got the following error:
Error downloading modules: Error loading modules: module asg: duplicated. module names must be unique
My Terraform code:
auto_scaling.tf
resource "aws_launch_configuration" "launch_config" {
  image_id        = "${var.ec2ami_id}"
  instance_type   = "${var.ec2_instance_type}"
  security_groups = ["${aws_security_group.*******.id}"]
  key_name        = "${var.keypair}"

  lifecycle {
    create_before_destroy = true
  }
}
module "asg" {
  source      = ****
  name        = "*****"
  environment = "***"
  service     = "****"
  product     = "**"
  team        = "****"
  owner       = "*****"

  ami = "${var.ec2_id}"
  #instance_profile = "******"
  instance_type  = "t2.micro"
  ebs_optimized  = true
  key_name       = "${var.keypair}"
  security_group = ["${aws_security_group.****.id}"]
  user_data      = "${path.root}/blank_user_data.sh"

  load_balancer_names = "${module.elb.elb_name}"
  associate_public_ip = false

  asg_instances                 = 2
  asg_min_instances             = 2
  asg_max_instances             = 4
  root_volume_size              = 250
  asg_wait_for_capacity_timeout = "5m"
  vpc_zone_subnets              = "${module.vpc.private_subnets}"
}
elb.tf
module "elb" {
  source                = "*****"
  name                  = "***elb"
  subnet_ids            = "${element(split(",", module.vpc.private_subnets), 0)}"
  security_groups       = "${aws_security_group.****.id}"
  s3_access_logs_bucket = "****"
}
I want to create 2 ASGs in one subnet.
You can reuse your asg module - just give both instances different resource names, e.g.:
module "asg1" {
  ...
}

module "asg2" {
  ...
}
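A fuller sketch of that layout, assuming both blocks point at the same module source (the source path, names, and sizing inputs here are placeholders, not your real values):

```hcl
# Two independent ASGs from one shared module; only the module
# labels ("asg1", "asg2") and per-ASG inputs differ.
module "asg1" {
  source            = "./modules/asg" # placeholder path
  name              = "app-asg-1"
  asg_min_instances = 2
  asg_max_instances = 4
}

module "asg2" {
  source            = "./modules/asg" # same source, different label
  name              = "app-asg-2"
  asg_min_instances = 2
  asg_max_instances = 4
}
```

Both ASGs can attach to the same ELB by passing the same load_balancer_names value into each module call.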
I have applied the code below for tagging AWS EC2 instances in Terraform, but when it runs it only creates a single tag.
How can we add multiple tags? E.g.:
Add the creation date automatically.
Add automatic OS detection (whether it is Windows or Linux).
Please see the tag detail in the screenshot.
Gurus, your kind support will be highly appreciated.
I have added the following code for tagging.
# Block to create EC2 instances
resource "aws_instance" "ec2" {
  count                  = var.instance_count
  ami                    = "ami-005835d578c62050d"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key

  tags = {
    Name = "${var.name}-${count.index + 1}"
  }
}
The tags attribute accepts a map of strings. You can also use Terraform functions such as merge to combine default tags (if your use case has them) with custom, resource-specific tags.
# Block to create EC2 instances
resource "aws_instance" "ec2" {
  count                  = var.instance_count
  ami                    = "ami-005835d578c62050d"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key

  tags = merge(
    var.default_ec2_tags,
    {
      Name = "${var.name}-${count.index + 1}"
    }
  )
}

variable "default_ec2_tags" {
  type        = map(string)
  description = "(optional) default tags for EC2 instances"
  default = {
    managed_by  = "terraform"
    environment = "dev"
  }
}
Something very specific to the Terraform AWS provider, and a very handy feature, is default_tags, which you can configure at the provider level; those tags are then applied to all resources managed by that provider. See HashiCorp's tutorial on default tags in the Terraform AWS provider.
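A minimal sketch of provider-level default_tags (the region and tag values here are placeholders):

```hcl
provider "aws" {
  region = "us-east-1" # placeholder

  # Applied to every taggable resource this provider manages;
  # resource-level tags with the same key take precedence.
  default_tags {
    tags = {
      managed_by  = "terraform"
      environment = "dev"
    }
  }
}
```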
It's not possible to get the OS type tag natively, as mentioned by @Marcin already in the comments.
You can add other tags by simply adding entries to your tags map, for example:
tags = {
  Name         = "${var.name}-${count.index + 1}"
  CreationDate = timestamp()
  OS           = "Linux"
}
In Terraform, I wish to create 3 servers while I have 2 subnets.
Creating 2 servers with the code below maps each server to a subnet ID according to the count. But what if I want 3 servers? I don't mind which of the subnets the third server lands in.
resource "aws_instance" "consul_server" {
  count                  = 2
  ami                    = "ami-00ddb0e5626798373"
  instance_type          = "t2.micro"
  subnet_id              = var.private_subnet_id[count.index]
  vpc_security_group_ids = [aws_security_group.consul_server.id]

  tags = {
    Name           = "consul-server-${count.index + 1}-${var.project_name}"
    tag_enviroment = var.tag_enviroment
    project_name   = var.project_name
    consul_server  = "true"
    role           = "consul-server"
  }
}
Normally you would use element to wrap-around indexing:
subnet_id = element(var.private_subnet_id, count.index)
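element wraps the index modulo the list length, so with three servers over two subnets the third server simply reuses the first subnet. A sketch trimmed to the relevant attributes:

```hcl
resource "aws_instance" "consul_server" {
  count         = 3
  ami           = "ami-00ddb0e5626798373"
  instance_type = "t2.micro"

  # element() wraps around: count.index 0, 1, 2 map to
  # subnets 0, 1, 0 when the list has two entries.
  subnet_id = element(var.private_subnet_id, count.index)
}
```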
I want to create a GCP Redis instance in a service project that has a shared subnetwork from a host project. I don't want the Redis instance at the top level of the VPC network, but rather in a subnetwork of that VPC network.
So instead of authorized_network equal to:
"projects/infra/global/networks/infra"
I want authorized_network to be equal to:
"projects/infra/regions/europe-west1/subnetworks/service"
Under the VPC network -> Shared VPC tab I can see my subnetwork "service" shared with the service project, and I can see it belongs to the "infra" VPC network. But when I try to create the instance, in the GUI or with Terraform, I can only select the top-level "infra" VPC network, not the subnetwork.
Terraform code I tried that didn't work:
resource "google_redis_instance" "test" {
  auth_enabled       = true
  authorized_network = "projects/infra/regions/europe-west1/subnetworks/service"
  connect_mode       = "PRIVATE_SERVICE_ACCESS"
  name               = "test"
  project            = local.infra_project_id
  display_name       = "test"
  memory_size_gb     = 1
  redis_version      = "REDIS_6_X"
  region             = "europe-west1"
}
Terraform code that works, but against the VPC network, not the subnetwork:
resource "google_redis_instance" "test" {
  auth_enabled       = true
  authorized_network = "projects/infra/global/networks/infra"
  connect_mode       = "PRIVATE_SERVICE_ACCESS"
  name               = "test"
  project            = local.infra_project_id
  display_name       = "test"
  memory_size_gb     = 1
  redis_version      = "REDIS_6_X"
  region             = "europe-west1"
}
First of all, is this possible?
Second, what is needed to get it to work?
I set up a Jenkins pipeline that runs Terraform to create a new EC2 instance in our VPC and register it in our private hosted zone on Route 53 (which is created at the same time) on every run.
I also managed to save the state in S3, so it no longer fails on the hosted zone being re-created.
The main issue I have is that on every run, Terraform replaces the previous instance with the new one instead of adding it to the pool of instances.
How can I avoid this?
Here's a snippet of my code:
terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = "${var.region}"
}

data "aws_ami" "image" {
  # limit search criteria for performance
  most_recent = "${var.ami_filter_most_recent}"
  name_regex  = "${var.ami_filter_name_regex}"
  owners      = ["${var.ami_filter_name_owners}"]

  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = ["${var.ami_filter_purpose}"]
  }

  # filter on tag os
  filter {
    name   = "tag:os"
    values = ["${var.ami_filter_os}"]
  }
}

resource "aws_instance" "server" {
  # use the AMI extracted from the image data source
  ami                    = data.aws_ami.image.id
  availability_zone      = data.aws_subnet.most_available.availability_zone
  subnet_id              = data.aws_subnet.most_available.id
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group}"]
  user_data              = "${var.user_data}"
  iam_instance_profile   = "${var.iam_instance_profile}"

  root_block_device {
    volume_size = "${var.root_disk_size}"
  }

  ebs_block_device {
    device_name = "${var.extra_disk_device_name}"
    volume_size = "${var.extra_disk_size}"
  }

  tags = {
    Name = "${local.available_name}"
  }
}

resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name

  vpc {
    vpc_id = var.vpc_id
  }
}

resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]

  depends_on = [
    aws_route53_zone.private
  ]
}
The outcome is that my previously created instance is destroyed and a new one is created. What I want is to keep adding instances with this code.
Thank you.
Your code creates only one instance, aws_instance.server, and any change to its properties will modify that one instance, because your backend is in S3 and therefore acts as a single global state shared by every pipeline run. The same goes for aws_route53_record.record and everything else in your script.
If you want different pipeline runs to reuse the same script, you should either use different workspaces or create a separate Terraform state for each run. The other alternative is to redefine your script to take a map of instances as an input variable and use for_each to create distinct instances.
If those instances should be identical, you should instead manage their count with an aws_autoscaling_group and its desired capacity.
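A minimal sketch of the for_each alternative (the variable name, its shape, and the example key are placeholders):

```hcl
variable "servers" {
  type = map(object({
    instance_type = string
  }))
  # e.g. { "server-1" = { instance_type = "t3.micro" } }
}

resource "aws_instance" "server" {
  # one instance per map key; adding a key adds an instance
  # without replacing the existing ones
  for_each      = var.servers
  ami           = data.aws_ami.image.id
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}
```

Each pipeline run would then add a new entry to var.servers rather than redefining the single aws_instance.server.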
Currently we are using the blue/green deployment model for our application with Terraform.
Our TF files have resources for both blue and green, as seen below:
resource "aws_instance" "green_node" {
  count = "${var.node_count * var.keep_green * var.build}"

  lifecycle {
    create_before_destroy = true
  }

  ami                         = "${var.green_ami_id}"
  instance_type               = "${lookup(var.instance_type, lower(var.env))}"
  security_groups             = "${split(",", lookup(var.security_groups, format("%s-%s", lower(var.env), var.region)))}"
  subnet_id                   = "${element(split(",", lookup(var.subnets, format("%s-%s", lower(var.env), var.region))), count.index)}"
  iam_instance_profile        = "${var.iam_role}"
  key_name                    = "${var.key_name}"
  associate_public_ip_address = "false"

  tags {
    Name = "node-green-${var.env}-${count.index + 1}"
  }

  user_data = "${data.template_cloudinit_config.green_node.rendered}"
}
resource "aws_instance" "blue_node" {
  count = "${var.node_count * var.keep_blue * var.build}"

  lifecycle {
    create_before_destroy = true
  }

  ami                         = "${var.blue_ami_id}"
  instance_type               = "${lookup(var.instance_type, lower(var.env))}"
  security_groups             = "${split(",", lookup(var.security_groups, format("%s-%s", lower(var.env), var.region)))}"
  subnet_id                   = "${element(split(",", lookup(var.subnets, format("%s-%s", lower(var.env), var.region))), count.index)}"
  iam_instance_profile        = "${var.iam_role}"
  key_name                    = "${var.key_name}"
  associate_public_ip_address = "false"

  tags {
    Name = "node-blue-${var.env}-${count.index + 1}"
  }

  user_data = "${data.template_cloudinit_config.blue_node.rendered}"
}
My question: is there a way to update the green resources without updating the blue resources (and vice versa) without using a targeted plan? For example, if we update the security groups (var.security_groups), which is a common variable, the update will apply to both blue and green, and I have to do a targeted plan (seen below) to keep the blue resources from picking up the new security groups:
terraform plan -out=green.plan -target=<green_resource_name>
This is a good question.
If you want the blue/green stacks to behave as you expect while reducing the complexity of the code, you can use Terraform modules and set a variable that controls which color you update.
The two stacks then share the module when you update blue or green resources. Define a variable such as TF_VAR_stack_color set to blue or green, and include ${var.stack_color} in the name of any resource you create or update in the modules:
module "nodes" {
  source = "modules/nodes"
  name   = "${var.name}-${var.stack_color}-${var.others}"
  ...
}
So you can plan the blue resources with the command below without impacting the running green resources:
TF_VAR_stack_color=blue terraform plan
or:
terraform plan -var stack_color=blue
With Terraform modules, you needn't write the aws_instance resource twice for the blue and green nodes.
I would also recommend splitting the resources into different state files via terraform init, so the two colors become totally separate stacks.
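One way to keep the two colors in separate state files is Terraform's partial backend configuration: leave the state key out of the backend block and supply it per color at init time (the bucket name below is a placeholder):

```hcl
terraform {
  backend "s3" {
    bucket = "my-tf-state" # placeholder
    region = "eu-west-1"

    # "key" is intentionally omitted; supply it per color, e.g.:
    #   terraform init -backend-config="key=blue/terraform.tfstate"
    #   terraform init -backend-config="key=green/terraform.tfstate"
  }
}
```

Each color then plans and applies against its own state, so changes to one can never touch the other.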