I need to output the private IPs of the EC2 instances created by an Auto Scaling Group that is configured with Terraform. How can I get the private IPs from the ASG?
# Launch config for ASG
resource "aws_launch_configuration" "asg_conf" {
name = "ASG-Conf-${var.env}"
image_id = data.aws_ami.ubuntu.id # Get image from data
instance_type = var.instance_type # use t2.micro ad default
security_groups = [var.sg_app.id, var.sg_alb.id] # SG for APP
key_name = var.ssh_key # SSH key for connection to EC2
user_data = file("./modules/ec2/shell/apache.sh") # install apache
lifecycle {
create_before_destroy = true
}
}
# Auto Scaling Group
resource "aws_autoscaling_group" "asg" {
count = length(var.private_subnet_id) # count numbers of private subnets
name = "ASG-${var.env}-${count.index + 1}"
vpc_zone_identifier = [var.private_subnet_id[count.index]]
launch_configuration = aws_launch_configuration.asg_conf.name
target_group_arns = [var.alb_target.arn]
min_size = 1 # Min size of creating EC2
max_size = 1 # Max size of creating EC2
health_check_grace_period = 120
health_check_type = "ELB"
force_delete = true
lifecycle {
create_before_destroy = true
}
tag {
key = "Name"
value = "Webserver-ec2-${count.index + 1}"
propagate_at_launch = true
}
}
output "asg_private_ips" {
  # How can I get this???
  value = aws_autoscaling_group.asg.private_ip
}
My infrastructure creates one EC2 instance in each of two private subnets (one ASG per AZ), and I need to output the IPs of the instances launched by these ASGs.
You need a separate data-source lookup to get the IPs, since the aws_autoscaling_group resource does not expose them. But this really does not make much sense, as instances in an ASG can be replaced by AWS at any time, which makes any IPs you output obsolete.
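If you still want the lookup, here is a minimal sketch using the built-in aws_instances data source, filtering on the Name tag that the ASGs propagate to their instances. The output name and the depends_on wiring are assumptions, not part of the original configuration:
# Look up the instances launched by the ASGs via the propagated Name tag
data "aws_instances" "asg_instances" {
  filter {
    name   = "tag:Name"
    values = ["Webserver-ec2-*"] # matches the tag propagated at launch
  }
  instance_state_names = ["running"]
  depends_on           = [aws_autoscaling_group.asg]
}

output "asg_private_ips" {
  value = data.aws_instances.asg_instances.private_ips
}
Note that the data source only sees instances that exist at the time it is read, so the output reflects a snapshot and will drift as the ASG replaces instances.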
When one creates an ASG (Auto Scaling Group) in the AWS Console, there is an option which can be checked: "receive traffic from one or more load balancers".
I was trying to do the same using the "aws_autoscaling_attachment" resource; however, I'm getting the error below. I can see that "MyALBWP" is present in the console.
ERROR: Failure attaching AutoScaling Group MyWPReaderNodesASGroup with Elastic Load Balancer: arn:aws:elasticloadbalancing:eu-west-2:262702952852:loadbalancer/app/MyALBWP/ef1dd71d87b8742b:
ValidationError: Provided Load Balancers may not be valid. Please ensure they exist and try again.
resource "aws_launch_configuration" "MyWPLC" {
name = "MyWPLCReaderNodes"
#count = 2 Was giving error as min, max size is mentioned in ASG
#name_prefix = "LC-" Error: "name_prefix": conflicts with name
image_id = aws_ami_from_instance.MyWPReaderNodes.id
instance_type = "t2.micro"
iam_instance_profile = aws_iam_instance_profile.MyWebInstanceProfile2.name # Attach S3 role to EC2 Instance
security_groups = [aws_security_group.WebDMZ.id] # Attach WebDMZ SG
user_data = file("./AutoScaleLaunch.sh")
lifecycle {
#prevent_destroy = "${var.prevent_destroy}"
create_before_destroy = true
}
# tags = { NOT VALID GIVES ERROR
# Name = "MyWPLC"
# }
}
# # Create AutoScaling Group for Reader Nodes
# Name: MyWPReaderNodesASGroup
# Launch Configuration : MyWPLC
# Group Size : 2
# Network : Select your VPC
# Subnets : Select your public Subnets
# Receive traffic from Load Balancer <<< Tried in "aws_autoscaling_attachment", gives the error below
# Target Group : MyWPInstances
# Health Check : ELB or EC2, Select ELB
# Health check grace period : 60 seconds
# tags name MyWPReaderNodesGroup
resource "aws_autoscaling_group" "MyWPReaderNodesASGroup" {
name = "MyWPReaderNodesASGroup"
# We want this to explicitly depend on the launch config above
depends_on = [aws_launch_configuration.MyWPLC]
max_size = 2
min_size = 2
health_check_grace_period = 60
health_check_type = "ELB"
desired_capacity = 2
force_delete = true
launch_configuration = aws_launch_configuration.MyWPLC.id
vpc_zone_identifier = [aws_subnet.PublicSubNet1.id, aws_subnet.PublicSubNet2.id]
target_group_arns = [aws_lb_target_group.MyWPInstancesTG.arn] # A list of aws_alb_target_group ARNs, for use with Application or Network Load Balancing.
#target_group_arns = [aws_lb.MyALBWP.id] # A list of aws_alb_target_group ARNs, for use with Application or Network Load Balancing.
#error: ValidationError: Provided Target Groups may not be valid. Please ensure they exist and try again.
# tags = { NOT REQUIRED GIVES ERROR : Error : Inappropriate value for attribute "tags": set of map of string required.
# Name = "MyWPReaderNodesGroup"
# }
}
# Create a new load balancer attachment
# ERROR: Failure attaching AutoScaling Group MyWPReaderNodesASGroup with Elastic Load Balancer: arn:aws:elasticloadbalancing:eu-west-2:262702952852:loadbalancer/app/MyALBWP/ef1dd71d87b8742b:
# ValidationError: Provided Load Balancers may not be valid. Please ensure they exist and try again.
resource "aws_autoscaling_attachment" "asg_attachment_elb" {
autoscaling_group_name = aws_autoscaling_group.MyWPReaderNodesASGroup.id
elb = aws_lb.MyALBWP.id
}
NOTE on AutoScaling Groups and ASG Attachments: Terraform currently provides both a standalone ASG Attachment resource (describing an ASG attached to an ELB), and an AutoScaling Group resource with load_balancers defined in-line. At this time you cannot use an ASG with in-line load balancers in conjunction with an ASG Attachment resource. Doing so will cause a conflict and will overwrite attachments.
From Resource: aws_autoscaling_attachment docs.
You have two options:
Delete the aws_autoscaling_attachment resource
Remove the target_group_arns argument from the aws_autoscaling_group resource, and in the aws_autoscaling_attachment resource replace the elb argument with alb_target_group_arn
Option 1 looks like this:
resource "aws_autoscaling_group" "MyWPReaderNodesASGroup" {
name = "MyWPReaderNodesASGroup"
# We want this to explicitly depend on the launch config above
depends_on = [aws_launch_configuration.MyWPLC]
max_size = 2
min_size = 2
health_check_grace_period = 60
health_check_type = "ELB"
desired_capacity = 2
force_delete = true
launch_configuration = aws_launch_configuration.MyWPLC.id
vpc_zone_identifier = [aws_subnet.PublicSubNet1.id, aws_subnet.PublicSubNet2.id]
target_group_arns = [aws_lb_target_group.MyWPInstancesTG.arn] # A list of aws_alb_target_group ARNs, for use with Application or Network Load Balancing.
}
Option 2 looks like this:
resource "aws_autoscaling_group" "MyWPReaderNodesASGroup" {
name = "MyWPReaderNodesASGroup"
# We want this to explicitly depend on the launch config above
depends_on = [aws_launch_configuration.MyWPLC]
max_size = 2
min_size = 2
health_check_grace_period = 60
health_check_type = "ELB"
desired_capacity = 2
force_delete = true
launch_configuration = aws_launch_configuration.MyWPLC.id
vpc_zone_identifier = [aws_subnet.PublicSubNet1.id, aws_subnet.PublicSubNet2.id]
}
resource "aws_autoscaling_attachment" "asg_attachment_elb" {
autoscaling_group_name = aws_autoscaling_group.MyWPReaderNodesASGroup.id
alb_target_group_arn = aws_lb_target_group.MyWPInstancesTG.arn
}
The aws_autoscaling_attachment resource should use the alb_target_group_arn parameter. You can use the same aws_lb_target_group.MyWPInstancesTG.arn value that you used when creating your Auto Scaling Group.
The elb parameter is for Classic Load Balancers, not Application Load Balancers.
More information is available here.
I am trying to create 3 EC2 instances, each with two private IPs attached to eth0 and eth1, using Terraform.
Can you suggest the correct Terraform resource I need to use to create and attach a secondary private IP address to each of the EC2 machines?
I know that by default it creates eth0 and attaches a private IP address; I am looking to create eth1 as part of instance creation and attach a private IP from a different subnet.
resource "aws_instance" "test" {
count = "${var.instance_count["test"]}"
ami = "${var.ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
vpc_security_group_ids = ["${aws_security_group.kafka_sg.id}"]
associate_public_ip_address = "${var.associate_public_ip_address}"
ebs_optimized = "${var.ebs_optimized}"
disable_api_termination = "${var.disable_api_termination}"
subnet_id = "${var.subnet_id}"
user_data = "${base64encode(file("${path.module}/mount.sh"))}"
tags = {
Name = "test-${var.instance_prefix}-${format("%02d", count.index+1)}"
}
root_block_device {
volume_type = "${var.root_volume_type}"
volume_size = "${var.root_volume_size}"
}
ebs_block_device{
device_name = "/dev/sdb"
volume_size = 10
volume_type = "gp2"
}
}
Add an Elastic Network Interface to each server by creating the ENI via aws_network_interface, and then attaching it via aws_network_interface_attachment.
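A rough sketch in the same style as the question's code; the var.secondary_subnet_id variable and the resource names are assumptions, not something from the original configuration:
# One extra ENI per instance, created in the second subnet
resource "aws_network_interface" "eth1" {
  count           = "${var.instance_count["test"]}"
  subnet_id       = "${var.secondary_subnet_id}" # subnet for the secondary IP (assumed variable)
  security_groups = ["${aws_security_group.kafka_sg.id}"]
  tags = {
    Name = "test-${var.instance_prefix}-eth1-${format("%02d", count.index + 1)}"
  }
}

# Attach each ENI to the matching instance as device index 1 (eth1)
resource "aws_network_interface_attachment" "eth1" {
  count                = "${var.instance_count["test"]}"
  instance_id          = "${element(aws_instance.test.*.id, count.index)}"
  network_interface_id = "${element(aws_network_interface.eth1.*.id, count.index)}"
  device_index         = 1
}
If the secondary interface should instead be declared at instance creation time, aws_instance also accepts network_interface blocks, but then the primary (device index 0) interface has to be declared the same way and subnet_id/security-group arguments move off the instance.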
I have a private subnet with a default route targeting a NAT gateway. Both were created by Terraform.
Now I have other code to launch an EC2 instance to use as a NAT in my VPC (as the managed NAT gateway became very expensive). I'm trying to change the default route in my route table to this new EC2 instance and am getting the error below:
Error: Error applying plan:
1 error occurred:
* module.ec2-nat.aws_route.defaultroute_to_ec2-nat: 1 error occurred:
* aws_route.defaultroute_to_ec2-nat: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: 408deb59-d223-4c9f-9a28-209e2e0478e9
I know this route already exists, but how do I change this existing route to a new target, in this case my new EC2 instance's network interface?
Thanks for your help.
Following is the code I'm using:
#####################
# FIRST TERRAFORM
# create the internet gateway
resource "aws_internet_gateway" "this" {
count = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
vpc_id = "${aws_vpc.this.id}"
tags = "${merge(map("Name", format("%s", var.name)), var.igw_tags, var.tags)}"
}
# Add default route (0.0.0.0/0) to internet gateway
resource "aws_route" "public_internet_gateway" {
count = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
route_table_id = "${aws_route_table.public.id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.this.id}"
timeouts {
create = "5m"
}
}
#####################
# SECOND TERRAFORM
# Spin EC2 to run as NAT
resource "aws_instance" "ec2-nat" {
count = "${var.instance_qtd}"
ami = "${data.aws_ami.nat.id}"
availability_zone = "${var.region}a"
instance_type = "${var.instance_type}"
key_name = "${var.aws_key_name}"
vpc_security_group_ids = ["${var.sg_ec2}","${var.sg_ops}"]
subnet_id = "${var.public_subnet_id}"
iam_instance_profile = "${var.iam_instance_profile}"
associate_public_ip_address = true
source_dest_check = false
tags = {
Name = "ec2-nat-${var.brand}-${var.role}-${count.index}"
Brand = "${var.brand}"
Role = "${var.role}"
Type = "ec2-nat"
}
}
# Add default route (0.0.0.0/0) to aws_instance.ec2-nat
variable "default_route" {
default = "0.0.0.0/0"
}
resource "aws_route" "defaultroute_to_ec2-nat" {
route_table_id = "${var.private_route_id}"
destination_cidr_block = "${var.default_route}"
instance_id = "${element(aws_instance.ec2-nat.*.id, 0)}"
}
We have a requirement to create an EC2 module and use it to create EC2 instances (1 or more) with an EBS device/EBS volume, and to use the same EC2 module to create EC2 instances (1 or more) without any EBS volumes.
I tried it via a conditional (count) but am hitting all sorts of errors. Help!
When trying to conditionally create a resource, you can use a ternary to calculate the count parameter.
A few notes:
When using count, the aws_instance.example, aws_ebs_volume.ebs-volume-1, and aws_ebs_volume.ebs-volume-2 resources will be arrays.
When attaching the EBS volumes to the instances, since the aws_volume_attachment resources have a count, you can think of them as iterating the arrays to attach the volume to the EC2 instances.
You can use count.index to extract the correct item from the array of the EC2 instances and the two EBS volume resources. For each value of count, the block is executed once.
variable "create_ebs" {
default = false
}
variable "instance_count" {
default = "1"
}
resource "aws_instance" "example" {
count = "${var.instance_count}"
ami = "ami-1"
instance_type = "t2.micro"
subnet_id = "subnet-1" #need to have more than one subnet
}
resource "aws_ebs_volume" "ebs-volume-1" {
count = "${var.create_ebs ? var.instance_count : 0}"
availability_zone = "us-east-1a" #use az based on the subnet
size = 10
type = "standard"
}
resource "aws_ebs_volume" "ebs-volume-2" {
count = "${var.create_ebs ? var.instance_count : 0}"
availability_zone = "us-east-1a"
size = 10
type = "gp2"
}
resource "aws_volume_attachment" "ebs-volume-1-attachment" {
count = "${var.create_ebs ? var.instance_count : 0}"
device_name = "/dev/sdf${count.index}"
volume_id = "${element(aws_ebs_volume.ebs-volume-1.*.id, count.index)}"
instance_id = "${element(aws_instance.example.*.id, count.index)}"
}
resource "aws_volume_attachment" "ebs-volume-2-attachment" {
count = "${var.create_ebs ? var.instance_count : 0}"
device_name = "/dev/sdg${count.index}"
volume_id = "${element(aws_ebs_volume.ebs-volume-2.*.id, count.index)}"
instance_id = "${element(aws_instance.example.*.id, count.index)}"
}
For more info on count.index, see the Terraform interpolation documentation.
I have a Terraform script that launches a VPC, subnets, a database, autoscaling, and some other stuff. Autoscaling uses default Windows Server 2012 R2 images to launch new instances (including the initial ones). Every instance runs a Chef install after launch. I need to log into the instances so I can confirm that Chef is installed, but I don't have any .pem keys. How do I launch an instance with autoscaling and a launch configuration and output a .pem file so I can log in afterwards?
Here is my autoscaling part of the script:
resource "aws_autoscaling_group" "asgPrimary" {
depends_on = ["aws_launch_configuration.primary"]
availability_zones = ["${data.aws_availability_zones.available.names[0]}"]
name = "TerraformASGPrimary"
max_size = 1
min_size = 1
wait_for_capacity_timeout = "0"
health_check_grace_period = 300
health_check_type = "ELB"
desired_capacity = 1
force_delete = false
wait_for_capacity_timeout = "0"
vpc_zone_identifier = ["${aws_subnet.private_primary.id}"]
#placement_group = "${aws_placement_group.test.id}"
launch_configuration = "${aws_launch_configuration.primary.name}"
load_balancers = ["${aws_elb.elb.name}"]
}
and this is my launch configuration:
resource "aws_launch_configuration" "primary" {
depends_on = ["aws_subnet.primary"]
name = "web_config_primary"
image_id = "${data.aws_ami.amazon_windows_2012R2.id}"
instance_type = "${var.ami_type}"
security_groups = ["${aws_security_group.primary.id}"]
user_data = "${template_file.user_data.rendered}"
}
I need to avoid using the AWS CLI or the web console itself; the point is for all of this to be automated for reuse in all my other solutions.
The .pem files used to RDP/SSH into an EC2 instance are not generated during launch of an EC2 instance. It may appear that way when using the AWS Management Console, but in actuality the Key Pair is generated first, and then that Key Pair is assigned to the EC2 instance during launch.
To get your .pem file, first:
Generate a new Key Pair. See Amazon EC2 Key Pairs. When you do this, you will be able to download the .pem file.
Assign that Key Pair to your Auto Scaling Group's launch configuration using the key_name argument.
Here's an example:
resource "aws_launch_configuration" "primary" {
depends_on = ["aws_subnet.primary"]
name = "web_config_primary"
image_id = "${data.aws_ami.amazon_windows_2012R2.id}"
instance_type = "${var.ami_type}"
security_groups = ["${aws_security_group.primary.id}"]
user_data = "${template_file.user_data.rendered}",
key_name = "my-key-pair"
}
See: https://www.terraform.io/docs/providers/aws/r/launch_configuration.html#key_name
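Since the question mentions wanting to keep everything automated, the key pair itself can also be created by Terraform. A rough sketch, assuming the tls and local providers are acceptable; the resource names and the output file name are illustrative only, and note that the private key ends up in the Terraform state, which may not be acceptable for production:
# Generate an RSA key pair locally and register the public half with EC2
resource "tls_private_key" "asg_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "asg_key" {
  key_name   = "my-key-pair"
  public_key = "${tls_private_key.asg_key.public_key_openssh}"
}

# Write the private key to disk so it can be used to decrypt the Windows
# administrator password for RDP (or for SSH on Linux instances)
resource "local_file" "asg_key_pem" {
  filename = "${path.module}/my-key-pair.pem"
  content  = "${tls_private_key.asg_key.private_key_pem}"
}
The key_name argument in the launch configuration would then reference aws_key_pair.asg_key.key_name instead of a hard-coded string.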