Dynamically add resources in Terraform - amazon-web-services

I set up a Jenkins pipeline that launches Terraform to create a new EC2 instance in our VPC and register it, on every run, to our private hosted zone on Route 53 (which is created at the same time).
I also managed to save the state in S3 so the run doesn't fail when the hosted zone would otherwise be re-created.
The main issue I have is that on every run Terraform keeps replacing the previous instance with the new one instead of adding it to the pool of instances.
How can I avoid this?
Here's a snippet of my code:
terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}
provider "aws" {
  region = "${var.region}"
}
data "aws_ami" "image" {
  # limit search criteria for performance
  most_recent = "${var.ami_filter_most_recent}"
  name_regex  = "${var.ami_filter_name_regex}"
  owners      = ["${var.ami_filter_name_owners}"]
  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = ["${var.ami_filter_purpose}"]
  }
  # filter on tag os
  filter {
    name   = "tag:os"
    values = ["${var.ami_filter_os}"]
  }
}
resource "aws_instance" "server" {
  # use extracted ami from image data source
  ami                    = data.aws_ami.image.id
  availability_zone      = data.aws_subnet.most_available.availability_zone
  subnet_id              = data.aws_subnet.most_available.id
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group}"]
  user_data              = "${var.user_data}"
  iam_instance_profile   = "${var.iam_instance_profile}"
  root_block_device {
    volume_size = "${var.root_disk_size}"
  }
  ebs_block_device {
    device_name = "${var.extra_disk_device_name}"
    volume_size = "${var.extra_disk_size}"
  }
  tags = {
    Name = "${local.available_name}"
  }
}
resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name
  vpc {
    vpc_id = var.vpc_id
  }
}
resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]
  depends_on = [
    aws_route53_zone.private
  ]
}
The outcome is that my previously created instance is destroyed and a new one is created; what I want is to keep adding instances with this code.
Thank you.

Your code declares only one instance, aws_instance.server, so any change to its properties modifies that single instance. Because your backend is in S3, it acts as shared global state across pipeline runs. The same goes for aws_route53_record.record and everything else in your configuration.
If you want different pipeline runs to reuse the exact same script, you should either use separate workspaces or create a separate Terraform state for each run. The other alternative is to redefine your configuration to take a map of instances as an input variable and use for_each to create the instances.
If those instances should be identical, you should manage their count using an aws_autoscaling_group and its desired capacity.
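A minimal sketch of the for_each approach, reusing the data sources from the question; the variable name var.instances and its contents are illustrative, not taken from the original code:

```hcl
variable "instances" {
  type        = map(string)
  description = "Map of instance name => instance type (illustrative)"
  default = {
    "server-a" = "t3.micro"
    "server-b" = "t3.micro"
  }
}

resource "aws_instance" "server" {
  # One instance per map key; adding a key adds an instance
  # without touching the existing ones.
  for_each      = var.instances
  ami           = data.aws_ami.image.id
  subnet_id     = data.aws_subnet.most_available.id
  instance_type = each.value

  tags = {
    Name = each.key
  }
}

resource "aws_route53_record" "record" {
  # Iterate over the created instances to get one record each.
  for_each = aws_instance.server
  zone_id  = aws_route53_zone.private.zone_id
  name     = "${each.key}.${var.hosted_zone_name}"
  type     = "A"
  ttl      = 300
  records  = [each.value.private_ip]
}
```

With this shape, the pipeline only needs to extend the map (for example via a -var argument) and re-apply; existing instances stay in state untouched.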

Related

How to apply different TAGs for AWS EC2 in Terraform

I have applied the code for tagging AWS EC2 instances in Terraform, but when the code runs it only creates a single tag.
How can we add multiple tags? E.g.:
It adds the creation date automatically.
It detects the OS automatically (whether it is Windows or Linux).
Please see the tag details in the screenshot.
Gurus, your kind support will be highly appreciated.
I have added the following code for tagging.
# Block for create EC2 Instance
resource "aws_instance" "ec2" {
  count                  = var.instance_count
  ami                    = "ami-005835d578c62050d"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key
  tags = {
    Name = "${var.name}-${count.index + 1}"
  }
}
The tags attribute accepts a map of strings, and you can also use Terraform functions such as merge to combine default tags (if your use case has them) with custom resource-specific tags.
# Block for create EC2 Instance
resource "aws_instance" "ec2" {
  count                  = var.instance_count
  ami                    = "ami-005835d578c62050d"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [var.security_group_id]
  subnet_id              = var.subnet_id
  key_name               = var.key
  tags = merge(var.default_ec2_tags,
    {
      Name = "${var.name}-${count.index + 1}"
    }
  )
}
variable "default_ec2_tags" {
  type        = map(string)
  description = "(optional) default tags for ec2 instances"
  default = {
    managed_by  = "terraform"
    environment = "dev"
  }
}
Something very specific to the Terraform AWS provider, and a very handy feature, is default_tags, which you configure at the provider level; those tags are then applied to all resources managed by that provider.
See the HashiCorp tutorial on default tags in the Terraform AWS provider.
It's not possible to get the OS type tag natively, as @Marcin already mentioned in the comments.
You can add other tags by simply adding them to your tags map, for example:
tags = {
  Name         = "${var.name}-${count.index + 1}"
  CreationDate = timestamp()
  OS           = "Linux"
}

Can we create an EC2 instance in Terraform using the AMI name of an image created with Packer?

I am trying my hand at Terraform and Packer.
I have created a custom image with Packer.
I know we can create an instance from an AMI ID.
I have tried:
resource "aws_instance" "packer-yellowpages" {
  ami           = "*******"
  instance_type = "t3.micro"
  tags = {
    Name = "demo"
  }
}
I was wondering if we can do the same with the AMI name?
The reason I am thinking about this is that I read somewhere that cloud providers scrap AMI IDs. So is there some way I can do this other than by ID?
Or implement some storage plan to store and access the image?
The aws_ami data source can be used to fetch information on an AMI based on filters such as its name.
data "aws_ami" "example" {
  executable_users = ["self"]
  most_recent      = true
  owners           = ["self"]
  filter {
    name   = "name"
    values = ["myami-*"]
  }
}
resource "aws_instance" "packer-yellowpages" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro"
  tags = {
    Name = "demo"
  }
}
Try this instead to see if it works:
data "aws_ami" "example" {
  executable_users = ["self"]
  most_recent      = true
  owners           = ["self"]
  name_regex       = "yellowpages"
}
It may need to match the AMI name, not the Name tag.

Spin an AWS EC2 Spot instance with some validity using Terraform

I am trying to spin up an AWS EC2 Spot instance with a validity period (for example, the Spot instance should be accessible for 2 or 3 hours and then be terminated).
I am able to spin up the Spot instance using the code below, but I am unable to set the duration/validity of the created Spot instance.
I am sharing my Terraform code (both main.tf and variable.tf) with which I am trying to spin up a Spot instance.
I tried to set the expiry of the Spot instance using the two lines below in my main.tf file, but it didn't work:
  valid_until = "${var.spot_instance_validity}"
  terminate_instances_with_expiration = true
For valid_until, I couldn't work out how to give the RFC 3339 format (YYYY-MM-DDTHH:MM:SSZ) calculated as 2 hours from the time I spin up the Spot instance, so I removed the two lines above from my main.tf file.
Below is my main.tf file used to spin up the Spot instance:
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}
resource "aws_spot_instance_request" "dev-spot" {
  ami                             = "${var.ami_web}"
  instance_type                   = "t3.medium"
  subnet_id                       = "subnet-xxxxxx"
  associate_public_ip_address     = "true"
  key_name                        = "${var.key_name}"
  vpc_security_group_ids          = ["sg-xxxxxxx"]
  spot_price                      = "${var.linux_spot_price}"
  wait_for_fulfillment            = "${var.wait_for_fulfillment}"
  spot_type                       = "${var.spot_type}"
  instance_interruption_behaviour = "${var.instance_interruption_behaviour}"
  block_duration_minutes          = "${var.block_duration_minutes}"
  tags = {
    Name = "dev-spot"
  }
}
Below is the variable file "variable.tf"
variable "access_key" {
  default = ""
}
variable "secret_key" {
  default = ""
}
variable "region" {
  default = "us-west-1"
}
variable "key_name" {
  default = "win-key"
}
variable "windows_spot_price" {
  type    = "string"
  default = "0.0309"
}
variable "linux_spot_price" {
  type    = "string"
  default = "0.0125"
}
variable "wait_for_fulfillment" {
  default = false
}
variable "spot_type" {
  type    = "string"
  default = "one-time"
}
variable "instance_interruption_behaviour" {
  type    = "string"
  default = "terminate"
}
variable "block_duration_minutes" {
  type    = "string"
  default = "0"
}
variable "ami_web" {
  default = "ami-xxxxxxxxxxxx"
}
The created Spot instance should have a validity of 1 or 2 hours, set from the variable.tf file, so that after 1 or 2 hours the Spot instance is terminated (or the Spot instance request is cancelled).
Is there a way I can spin up an AWS EC2 Spot instance with an expiry?
It is not possible to schedule instances for termination natively.
However, you can use CloudWatch Events and Lambda to create your own instance-termination logic: create a scheduled event in Terraform according to your variable (valid_until) that invokes a Lambda function to terminate the instance.
AWS also has a solution called Instance Scheduler. You simply attach tags to your Spot instances to create start/stop schedules.
However, in that case you should change the instance's shutdown behaviour, which defaults to stop, to terminate, so that your instances are terminated when stopped. This can be achieved with the instance_initiated_shutdown_behavior argument of aws_instance in Terraform.
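A hedged sketch of the CloudWatch Events wiring described above; the Lambda function, its code, and its invoke permission are assumed to exist and are not shown, and all names are illustrative:

```hcl
# Scheduled rule; a one-shot expiry could instead use a cron()
# expression derived from the desired valid_until time.
resource "aws_cloudwatch_event_rule" "spot_expiry" {
  name                = "terminate-dev-spot"
  schedule_expression = "rate(2 hours)"
}

# Targets a (hypothetical) Lambda function that calls
# ec2:TerminateInstances on the Spot instance, or cancels the request.
resource "aws_cloudwatch_event_target" "spot_expiry" {
  rule = aws_cloudwatch_event_rule.spot_expiry.name
  arn  = aws_lambda_function.terminate_spot.arn
}
```

The Lambda function itself would look up the instance (e.g. by tag) and terminate it, which is where the actual expiry logic lives.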

How to get the IP of the new instances after scaling?

I would like to get the IPs of only the new instances that Terraform has created after updating some existing infrastructure.
I have the following resource:
resource "aws_instance" "masters" {
  count             = "${var.masters_count}"
  ami               = "${var.aws_centos_ami}"
  instance_type     = "t2.medium"
  ......
  availability_zone = "eu-west-1b"
  root_block_device {
    delete_on_termination = "${var.volume_delete_on_termination}"
  }
  tags {
    Name = "master-${count.index}"
  }
}
If I use the following local-exec provisioner, it writes the IPs of all the masters instances to a file:
provisioner "local-exec" {
  command = "echo \"${join("\n", aws_instance.masters.*.private_ip)}\" >> ../ansible-provision/inventory/hosts.ini"
}
I deploy this infrastructure with 5 instances. Then I want to add another 3 instances, so I change count to 8.
How can I get the IPs of those 3 new instances?
Solution:
As I have some scripts that are run and cannot be made idempotent, it's easy enough to use Ansible to put some additional 'scaffolding' around the non-idempotent elements, with conditional execution of the scripts so that they are only run once.
https://groups.google.com/forum/#!topic/terraform-tool/YVHReDbJ2Gw
Use null_resource:
resource "null_resource" "ips" {
  triggers {
    ids = "${join(",", aws_instance.masters.*.id)}"
  }
  provisioner "local-exec" {
    ...
  }
}
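A fuller sketch of that null_resource, reusing the hosts.ini path from the question (0.11-era syntax to match the answer; the provisioner body is filled in purely as an illustration):

```hcl
resource "null_resource" "ips" {
  # Changing count from 5 to 8 changes the joined ID list,
  # which re-triggers the provisioner below.
  triggers {
    ids = "${join(",", aws_instance.masters.*.id)}"
  }

  provisioner "local-exec" {
    # Overwrites (rather than appends to) the inventory each run,
    # so the new IPs appear alongside the existing ones without duplicates.
    command = "echo \"${join("\n", aws_instance.masters.*.private_ip)}\" > ../ansible-provision/inventory/hosts.ini"
  }
}
```

With the full inventory regenerated on every scale-up, idempotent Ansible tasks can then be safely re-run against all hosts.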

Define tags in central section in TerraForm

I've been playing around with Terraform for a bit and I was wondering if this is possible. It's best practice to assign tags to each resource you create on AWS (for example). So, what you do first is come up with a tagging strategy (for example, which business unit, the name of the app, the team responsible for it, ...).
However, in Terraform, this means that you have to repeat each tags-block for each resource. This isn't very convenient and if you want to update 1 of the tag names, you have to update each resource that you created.
For example:
resource "aws_vpc" "vpc" {
  cidr_block = "${var.cidr}"
  tags {
    Name        = "${var.name}"
    Project     = "${var.projectname}"
    Environment = "${var.environment}"
  }
}
If I want to create a Subnet and EC2 in that VPC with the same tags, I have to repeat that tags-block. If I want to update 1 of the tag names later on, I have to update each resource individually, which is very time consuming and tedious.
Is there a possibility to create a block of tags in a centralized location and refer to that? I was thinking of Modules, but that doesn't seem to fit the definition of a module.
You can also try local values, available from version 0.10.3. They let you assign a symbolic local name to an expression so it can be used multiple times in a configuration without repetition.
# Define the common tags for all resources
locals {
  common_tags = {
    Component   = "awesome-app"
    Environment = "production"
  }
}
# Create a resource that blends the common tags with instance-specific tags.
resource "aws_instance" "server" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
  tags = "${merge(
    local.common_tags,
    map(
      "Name", "awesome-app-server",
      "Role", "server"
    )
  )}"
}
From Terraform version 0.12 onwards:
This is the variable
variable "sns_topic_name" {
  type        = string
  default     = "VpnTopic"
  description = "Name of the sns topic"
}
This is the code
locals {
  common_tags = {
    Terraform = true
  }
}
# Create a Resource
resource "aws_sns_topic" "sns_topic" {
  name = var.sns_topic_name
  tags = merge(
    local.common_tags,
    {
      "Name" = var.sns_topic_name
    }
  )
}
The output will be:
+ tags = {
    + "Name"      = "VpnTopic"
    + "Terraform" = "true"
  }
Terraform now supports this natively for the AWS provider.
As of version 3.38.0 of the Terraform AWS Provider, the configuration language also enables provider-level tagging:
# Terraform 0.12 and later syntax
provider "aws" {
  # ... other configuration ...
  default_tags {
    tags = {
      Environment = "Production"
      Owner       = "Ops"
    }
  }
}
resource "aws_vpc" "example" {
  # ... other configuration ...
  # This configuration by default will internally combine tags defined
  # within the provider configuration block and those defined here
  tags = {
    Name = "MyVPC"
  }
}
For the aws_vpc.example resource, the tags below will be assigned: the combination of the tags defined under the provider and the tags defined in the aws_vpc block.
+ tags = {
    + "Environment" = "Production"
    + "Name"        = "MyVPC"
    + "Owner"       = "Ops"
  }