Assume I have an existing Elastic IP on my AWS account.
For reasons beyond the scope of this question, this EIP is not (and cannot be) managed via Terraform.
I now want to assign this EIP (say 11.22.33.44) to an EC2 instance I create via TF.
The traditional approach would of course be to create both the EIP and the EC2 instance via TF:
resource "aws_eip" "my_instance_eip" {
instance = "my_instance.id"
vpc = true
}
resource "aws_eip_association" "my_eip_association" {
instance_id = "my_instance.id"
allocation_id = "aws_eip.my_instance_eip.id"
}
Is there a way, however, to let the EC2 instance know via TF that it should be assigned an EIP (11.22.33.44) that lives outside of the TF lifecycle?
You can use the aws_eip data source to look up your existing EIP and then use it in your aws_eip_association:
data "aws_eip" "my_instance_eip" {
public_ip = "11.22.33.44"
}
resource "aws_eip_association" "my_eip_association" {
instance_id = aws_instance.my_instance.id
allocation_id = data.aws_eip.my_instance_eip.id
}
Related
I have 3 existing EBS volumes that I am trying to attach to instances created by an Auto Scaling group. Below is the Terraform code defining the EBS volumes:
EBS Volumes
resource "aws_ebs_volume" "volumes" {
count = "${(var.enable ? 1 : 0) * var.number_of_zones}"
availability_zone = "${element(var.azs, count.index)}"
size = "${var.volume_size}"
type = "${var.volume_type}"
lifecycle {
ignore_changes = [
"tags",
]
}
tags {
Name = "${var.cluster_name}-${count.index + 1}"
}
}
My plan is to first use the Terraform import utility so the volumes can be managed by Terraform. Without doing this import, Terraform assumes I am trying to create new EBS volumes, which I do not want.
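For example, I'm planning to run something like the following for each volume (the volume ID here is just a placeholder):
terraform import 'aws_ebs_volume.volumes[0]' vol-0123456789abcdef0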
Additionally, I discovered this aws_volume_attachment resource to attach these volumes to instances. I'm struggling to determine what value to put as the instance_id in this resource:
Volume Attachment
resource "aws_volume_attachment" "volume_attachment" {
count = length("${aws_ebs_volume.volumes.id}")
device_name = "/dev/sdf"
volume_id = aws_ebs_volume.volumes.*.id
instance_id = "instance_id_from_autoscaling_group"
}
Additionally, the launch configuration resource has an ebs_block_device block; do I need anything else in this block? Any advice would be helpful, as I am having some trouble.
ebs_block_device {
  device_name = "/dev/sdf"
  no_device   = true
}
I'm struggling to determine what value to put as the instance_id in this resource
If you create an ASG using TF, you don't have access to the instance IDs. The reason is that the ASG is treated as one entity rather than as individual instances.
The only way to get the instance IDs out of an ASG created this way would be through something like an external data source or a Lambda-backed lookup.
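For illustration, one way to do that lookup without a Lambda is the aws_instances data source, filtering on the ASG's implicit aws:autoscaling:groupName tag; the group name below is a placeholder, not something from your config:
data "aws_instances" "asg_members" {
  instance_tags = {
    "aws:autoscaling:groupName" = "my-cluster-asg" # assumed ASG name
  }

  instance_state_names = ["running"]
}

output "asg_instance_ids" {
  value = data.aws_instances.asg_members.ids
}
Note that this list changes every time the ASG replaces an instance, so anything built on top of it is inherently fragile.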
But even if you could theoretically do it, instances in an ASG should be treated as fully disposable, interchangeable and identical. You should not need to customize them, as they can be terminated and replaced at any time by AWS's AZ rebalancing, instance failures or scaling activities.
OK, so I am trying to attach an EBS volume, which I have created using Terraform, to an ASG's instance using userdata, but the issue is that the instance and the volume end up in different AZs, which makes the attach fail. Below are the steps I am trying that fail:
resource "aws_ebs_volume" "this" {
for_each = var.ebs_block_device
size = lookup(each.value,"volume_size", null)
type = lookup(each.value,"volume_type", null)
iops = lookup(each.value, "iops", null)
encrypted = lookup(each.value, "volume_encrypt", null)
kms_key_id = lookup(each.value, "kms_key_id", null)
availability_zone = join(",",random_shuffle.az.result)
}
In the above resource, I am using the random provider to pick one AZ from a list of AZs, and the same list is provided to the ASG resource below:
resource "aws_autoscaling_group" "this" {
desired_capacity = var.desired_capacity
launch_configuration = aws_launch_configuration.this.id
max_size = var.max_size
min_size = var.min_size
name = var.name
vpc_zone_identifier = var.subnet_ids // <------ HERE
health_check_grace_period = var.health_check_grace_period
load_balancers = var.load_balancer_names
target_group_arns = var.target_group_arns
tag {
key = "Name"
value = var.name
propagate_at_launch = true
}
}
And here is the userdata I am using:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
instanceId=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id ${ebs_volume_id} --instance-id $instanceId --device /dev/nvme1n1
The above attaches the newly created volume, as I am passing the ${ebs_volume_id} output of the resource above.
But it is failing because the instance and the volume are in different AZs.
Can anyone suggest a better solution than hardcoding the AZ on both the ASG and the volume?
I'd have to understand more about what you're trying to do to solve this with just the aws provider and terraform. And honestly, most ideas are going to be a bit complex.
You could have one ASG per AZ. Otherwise, the ASG will pick an AZ at each launch, and you'll end up with more instances in one AZ than you have volumes there, and with volumes in other AZs that have no instances to attach to.
So you could create a number of volumes per AZ and an ASG per AZ (see the sketch below). Then the userdata should list all the volumes in that AZ that are not attached to an instance, pick the ID of the first unattached one, and attach it. If all are attached, you should trigger your alerting, because you have more instances than volumes.
Any attempt to do this with a single ASG is really an attempt at writing your own ASG but doing it in a way that fights with your actual ASG.
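A minimal Terraform sketch of the per-AZ pairing, assuming var.azs, var.volume_size and var.subnet_ids_by_az are placeholders for your own values (the launch configuration reference matches your existing one):
resource "aws_ebs_volume" "per_az" {
  for_each = toset(var.azs)

  availability_zone = each.key
  size              = var.volume_size
}

resource "aws_autoscaling_group" "per_az" {
  for_each = toset(var.azs)

  name                 = "${var.name}-${each.key}"
  min_size             = 1
  max_size             = 1
  desired_capacity     = 1
  launch_configuration = aws_launch_configuration.this.id

  # Restrict each group to the subnet(s) of its AZ; var.subnet_ids_by_az is an
  # assumed map of AZ name => list of subnet IDs.
  vpc_zone_identifier = var.subnet_ids_by_az[each.key]
}
Each group then only ever launches instances in the AZ where its volume lives, so the userdata lookup cannot pick a volume from the wrong AZ.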
But there is a company that offers managing this as a service. They also help you run the instances as spot instances to save cost: https://spot.io/
Their elastigroup resource is an ASG managed by them, so you won't have an AWS ASG anymore, but they have some interesting stateful configurations.
We support instance persistence via the following configurations. All values are boolean. For more information on instance persistence, please see: Stateful configuration
persist_root_device - (Optional) Boolean, should the instance maintain its root device volumes.
persist_block_devices - (Optional) Boolean, should the instance maintain its Data volumes.
persist_private_ip - (Optional) Boolean, should the instance maintain its private IP.
block_devices_mode - (Optional) String, determine the way we attach the data volumes to the data devices, possible values: "reattach" and "onLaunch" (default is onLaunch).
private_ips - (Optional) List of Private IPs to associate to the group instances.(e.g. "172.1.1.0"). Please note: This setting will only apply if persistence.persist_private_ip is set to true
stateful_deallocation {
  should_delete_images             = false
  should_delete_network_interfaces = false
  should_delete_volumes            = false
  should_delete_snapshots          = false
}
This allows you to have an autoscaler that preserves volumes and handles the complexities for you.
I am trying to spin up an ECS cluster with Terraform, but cannot make the EC2 instances register as container instances in the cluster.
I first tried the verified module from Terraform, but this seems outdated (ecs-instance-profile has the wrong path).
Then I tried another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) on the same VPC and carefully compared the differences in resources. I managed to find some small differences, fixed them and tried again. But still no container instances!
A fork of the module is available here.
Can you see instances being created in the autoscaling group? If so, I'd suggest SSHing into one of them (either directly or using a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that's pretty visible in the logs, but YMMV.
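Since IAM is the usual culprit, here is a minimal sketch of the kind of instance profile container instances typically need in order to register (the resource names are assumptions; the instance also needs the cluster name written to /etc/ecs/ecs.config via userdata):
resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role" # assumed name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# AWS-managed policy that grants the ECS agent the API permissions it needs.
resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile" # assumed name
  role = aws_iam_role.ecs_instance.name
}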
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an existing AWS VPC/security group that our EC2 instances need to be created in. To create an EC2 instance, Terraform seems to require that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I wish to destroy the plan, it tries to destroy my VPC as well. How do I structure my resources so that "terraform apply" creates an EC2 instance in my imported VPC, but "terraform destroy" only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets and security groups by ID, so TF is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you should need to deploy the "aws_instance" is to give it an existing subnet ID in the existing VPC, as you already did. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when you deploy without the VPC resource and just use the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other TF stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it as data.aws_vpc.main.
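For instance (an illustrative sketch only; the security group name is made up), you could reference it from other resources like this:
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = data.aws_vpc.main.id
}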
Also, if you already imported your VPC but would like to remove it from your state without destroying it, you can do so with the terraform state command: https://www.terraform.io/docs/commands/state/index.html
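With the resource address from your snippet above, that would look like:
terraform state rm aws_vpc.my_vpc
After this, Terraform forgets about the VPC (so terraform destroy leaves it alone) while the VPC itself keeps running in AWS.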
Objective:
I'm trying to create a number of EC2 instances and assign each of them a static private IP address from a map variable. This address must be the primary address, which basically means I'm not using the DHCP-assigned address provided by AWS.
Issue:
The terraform plan succeeds, and I can create instances which show the static IP address assigned, together with the DHCP-assigned address (in the AWS console). When I SSH into the instance I see the primary address is the DHCP-assigned address. There is a second ENI attached to the instance (eth1), but the static IP address is not there.
The Question:
How do I have Terraform create the instance using the statically assigned IP address for the primary interface (eth0), instead of the default DHCP-assigned address? Basically, how can I either:
a) Create this interface with statically assigned interface at instance creation time or,
b) Replace the existing primary interface IP address with a static one (or the entire ENI itself with a newly created one with the static IP address)
Here are the details:
I am using the module as follows in a separate main.tf file:
a) create the instance
module "ec2-hadoop-manager" {
source = "../modules/ec2"
ami_id = "${var.dw_manager["ami"]}"
instance_type = "${var.dw_manager["instance_type"]}"
aws_region = "${var.region}"
availability_zone = "eu-west-1a"
associate_public_ip_address = true
role = "hadoop-manager"
env = "${var.environment}"
vpc = "${var.vpc_id}"
security_group_ids = "${var.aws_security_group_id["sg_id"]}"
key_name = "${var.key_name}"
subnet_id = "${var.default_subnet}"
number_of_instances = "${var.dw_manager["count"]}"
}
b) I'm assigning private IPs to the instances using a resource (outside the module block in main.tf):
resource "aws_network_interface" "private_ip" {
count = "${var.dw_manager["count"]}"
subnet_id = "${var.default_subnet}"
private_ips = ["${lookup(var.dw_manager_ips, count.index)}"]
security_groups = "${var.aws_security_group_id["sg_id"]}"
attachment {
instance = "${element(split(",", module.ec2-hadoop-manager.ec2_instance_ip), count.index)}"
device_index = 1
}
}
NOTE: I have tried changing device_index to 0, but as expected, AWS complains that there is already an ENI attached at device index 0:
Error applying plan:
3 error(s) occurred:
* aws_network_interface.private_ip.0: Error attaching ENI: InvalidParameterValue: Instance 'i-09f1371f798c2f6b3' already has an interface attached at device index '0'.
status code: 400, request id: ed1737d9-5342-491a-85a5-e49e70b7503d
* aws_network_interface.private_ip.2: Error attaching ENI: InvalidParameterValue: Instance 'i-012bda6948bbe00c9' already has an interface attached at device index '0'.
status code: 400, request id: 794c04fb-9089-4ad0-8f5d-ba572777575a
* aws_network_interface.private_ip.1: Error attaching ENI: InvalidParameterValue: Instance 'i-00ac215801de3aba8' already has an interface attached at device index '0'.
status code: 400, request id: cbd9e36d-145f-45d4-934f-0a9c2f6e7768
Some additional information that may be useful: Link to my full module definition files:
main definition file for the ec2 module:
http://pastebin.com/zXrZRrQ5
main outputs.tf definition file for ec2 module:
http://pastebin.com/9t5zjfQS
Terraform v0.9.4 shipped with a new feature on aws_instance that lets you assign to device index 0, via a new config option, network_interface:
https://www.terraform.io/docs/providers/aws/r/instance.html#network-interfaces
There's also the aws_network_interface_attachment, but I believe you want the new network_interface option for aws_instance above.
Example config:
resource "aws_network_interface" "foo" {
subnet_id = "${aws_subnet.my_subnet.id}"
private_ips = ["172.16.10.100"]
tags {
Name = "primary_network_interface"
}
}
resource "aws_instance" "foo" {
ami = "ami-22b9a343" // us-west-2
instance_type = "t2.micro"
network_interface {
network_interface_id = "${aws_network_interface.foo.id}"
device_index = 0
}
}
I don't know if you actually want to do what you're doing. From your description I would expect a single network interface with a "static" IP.
Now what I would do is use the private_ip parameter of aws_instance, which simply sets a fixed IP for the EC2 host. Done.
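A minimal sketch (the AMI, subnet reference and address are placeholders, borrowed from the example earlier in this thread):
resource "aws_instance" "example" {
  ami           = "ami-22b9a343" // us-west-2
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.my_subnet.id}"
  private_ip    = "172.16.10.101"
}
This puts the fixed address on the primary interface (eth0), with no extra ENI involved.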
What you seem to be trying is to create the host without a static IP in the aws_instance (you didn't show it, so I'm just assuming), which will IMHO always give the host a dynamic IP. Then you create a second network interface, give it a static IP, and attach it to the host. This will not be the first (or "primary") interface of the instance, but it does have a static IP, and according to your description this is what happens.
I am pretty sure you will always end up with a DHCP IP address when you don't specify your static IP in your host definition.
So either provide more information about your use case, or more code, but the way you are doing things is just wrong for what you want, if I understand you correctly.
I know this question is old, but I had something useful to add. The address for an EC2 instance is always from DHCP. AWS uses DHCP reservations to assign IP addresses. The only difference between a dynamically assigned address and "static" private address is that in the former AWS is randomly picking from the available addresses for the reservation and in the latter you're picking it. The way it gets applied is exactly the same for both, so you'll never see a static IP on the box.