I am trying to create an EC2 instance through an auto scaling group in Terraform.
I have something like:
resource "aws_ecs_cluster" "my_cluster" {
  name = "my-cluster"
}

resource "aws_autoscaling_group" "my_instances" {
  name                 = "my-instances"
  min_size             = 1
  max_size             = 2
  availability_zones   = ["us-east-1a"]
  launch_configuration = "${aws_launch_configuration.my_ecs_instance.id}"
}

resource "aws_launch_configuration" "my_ecs_instance" {
  name_prefix   = "my-ecs-instance"
  instance_type = "t2.micro"
  image_id      = "ami-19e8cc0e"
}
terraform plan -var-file=mykey.tfvars
works fine, but
terraform apply -var-file=mykey.tfvars
gets stuck creating the instance:
aws_autoscaling_group.my_instances: Still creating... (9m20s elapsed)
aws_autoscaling_group.my_instances: Still creating... (9m30s elapsed)
aws_autoscaling_group.my_instances: Still creating... (9m40s elapsed)
It eventually times out, saying:
aws_autoscaling_group.my_instances: "my-instances"
Waiting up to 10m0s: Need at least 1 healthy instances in ASG, have 0. Most recent activity:
..more..
StatusMessage: "No default VPC for this user. Launching EC2 instance failed."
I think I need to specify a VPC ID, but I don't see a vpc_id attribute on aws_autoscaling_group.
I am not sure how to fix this; can someone help me with it? Thanks a lot!
The wait happens because the auto scaling group is waiting for at least one EC2 instance to be up and running, as required by its configuration, but there is none. The error message states the root cause: "No default VPC for this user". In other words, no EC2 instance comes up because there is no VPC, subnet, and/or subnet identifier associated with the auto scaling group.
To resolve:
First, if you haven't done so already, create a VPC with the "aws_vpc" resource (see the sketch after these steps).
Next, create a subnet with the "aws_subnet" resource.
Next, associate the subnet(s) with the auto scaling group via "vpc_zone_identifier" in the "aws_autoscaling_group" resource.
The identifier should look like the line below, where "aws_subnet.main-public-1" is the subnet created in step 2:
vpc_zone_identifier = ["${aws_subnet.main-public-1.id}"]
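For steps 1 and 2, a minimal sketch could look like the following (the CIDR blocks, availability zone, and resource names are placeholders, not values taken from the question):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "main-public-1" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}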
I hope that helps
The error message is: StatusMessage: "No default VPC for this user. Launching EC2 instance failed."
You need to create a VPC with subnets and provide the subnet IDs when creating the auto scaling group.
Consider adding vpc_zone_identifier, as shown in the sketch below:
vpc_zone_identifier (Optional) A list of subnet IDs to launch resources in.
https://www.terraform.io/docs/providers/aws/r/autoscaling_group.html#vpc_zone_identifier
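Applied to the configuration in the question, that means replacing availability_zones with vpc_zone_identifier, roughly like this (assuming a subnet such as the aws_subnet.main-public-1 sketched in the previous answer exists):

resource "aws_autoscaling_group" "my_instances" {
  name                 = "my-instances"
  min_size             = 1
  max_size             = 2
  vpc_zone_identifier  = ["${aws_subnet.main-public-1.id}"]
  launch_configuration = "${aws_launch_configuration.my_ecs_instance.id}"
}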
I'm quite new to Terraform, and struggling with something.
I'm playing around with Redshift for a personal project, and I want to update the inbound security rules for the default security group which is applied to Redshift when it's created.
If I were doing it in the AWS Console, I'd add a new inbound rule with Type being All Traffic and Source being Anywhere-IPv4, which adds 0.0.0.0/0.
Below in main.tf I've tried to create a new security group and apply that to Redshift, but I get a VPC-by-Default customers cannot use cluster security groups error.
What is it I'm doing wrong?
resource "aws_redshift_cluster" "redshift" {
  cluster_identifier      = "redshift-cluster-pipeline"
  skip_final_snapshot     = true
  master_username         = "awsuser"
  master_password         = var.db_password
  node_type               = "dc2.large"
  cluster_type            = "single-node"
  publicly_accessible     = "true"
  iam_roles               = [aws_iam_role.redshift_role.arn]
  cluster_security_groups = [aws_redshift_security_group.redshift-sg.name]
}

resource "aws_redshift_security_group" "redshift-sg" {
  name = "redshift-sg"

  ingress {
    cidr = "0.0.0.0/0"
  }
}
The documentation for the Terraform resource aws_redshift_security_group states:
Creates a new Amazon Redshift security group. You use security groups
to control access to non-VPC clusters
The error message you are receiving is clearly stating that you are using the wrong type of security group: you need to use a VPC security group instead. Once you create the appropriate VPC security group, you set it on the aws_redshift_cluster resource via the vpc_security_group_ids property.
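A rough sketch of that change, assuming the cluster runs in the default VPC and that opening all traffic from 0.0.0.0/0 is really what you want (the data source and the security group name below are illustrative, not taken from the question):

data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "redshift_sg" {
  name   = "redshift-sg"
  vpc_id = data.aws_vpc.default.id

  # Equivalent of Type = All Traffic, Source = Anywhere-IPv4
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_redshift_cluster" "redshift" {
  cluster_identifier     = "redshift-cluster-pipeline"
  skip_final_snapshot    = true
  master_username        = "awsuser"
  master_password        = var.db_password
  node_type              = "dc2.large"
  cluster_type           = "single-node"
  publicly_accessible    = "true"
  iam_roles              = [aws_iam_role.redshift_role.arn]
  vpc_security_group_ids = [aws_security_group.redshift_sg.id]
}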
OK, so I am trying to attach an EBS volume that I created with Terraform to an ASG instance using user data, but the volume and the instance end up in different AZs, so the attach fails. Below is what I am trying, and where it fails:
resource "aws_ebs_volume" "this" {
  for_each          = var.ebs_block_device
  size              = lookup(each.value, "volume_size", null)
  type              = lookup(each.value, "volume_type", null)
  iops              = lookup(each.value, "iops", null)
  encrypted         = lookup(each.value, "volume_encrypt", null)
  kms_key_id        = lookup(each.value, "kms_key_id", null)
  availability_zone = join(",", random_shuffle.az.result)
}
In the resource above, I am using the random provider to pick one AZ from a list of AZs, and the same list is provided to the ASG resource below:
resource "aws_autoscaling_group" "this" {
  desired_capacity          = var.desired_capacity
  launch_configuration      = aws_launch_configuration.this.id
  max_size                  = var.max_size
  min_size                  = var.min_size
  name                      = var.name
  vpc_zone_identifier       = var.subnet_ids // <------ HERE
  health_check_grace_period = var.health_check_grace_period
  load_balancers            = var.load_balancer_names
  target_group_arns         = var.target_group_arns

  tag {
    key                 = "Name"
    value               = var.name
    propagate_at_launch = true
  }
}
And here is the user data I am using:
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
instanceId=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id ${ebs_volume_id} --instance-id $instanceId --device /dev/nvme1n1
This attaches the newly created volume, since I pass in ${ebs_volume_id} from the output of the resource above.
But it's failing because the instance and the volume are in different AZs.
Can anyone suggest a better solution than hardcoding the AZ on both the ASG and the volume?
I'd have to understand more about what you're trying to do to solve this with just the AWS provider and Terraform, and honestly, most ideas are going to be a bit complex.
You could have an ASG per AZ. Otherwise, the ASG is going to pick some AZ at each launch, and you'll end up with more instances in one AZ than you have volumes there, and volumes in other AZs with no instances to attach to.
So you could create a number of volumes per AZ and an ASG per AZ. The user data should then list all the volumes in its AZ that are not attached to an instance, pick the ID of the first unattached volume, and attach it. If all of them are attached, you should trigger your alerting, because you have more instances than you have volumes.
Any attempt to do this with a single ASG is really an attempt at writing your own ASG but doing it in a way that fights with your actual ASG.
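A sketch of the per-AZ idea, assuming a hypothetical var.az_subnets map from AZ name to subnet ID (that variable and the resource names below are made up for illustration; the launch configuration and var.name are reused from the question):

# Hypothetical input, e.g. { "eu-west-1a" = "subnet-aaa", "eu-west-1b" = "subnet-bbb" }
variable "az_subnets" {
  type = map(string)
}

# One volume per AZ (size and type here are placeholders)
resource "aws_ebs_volume" "per_az" {
  for_each          = var.az_subnets
  availability_zone = each.key
  size              = 50
  type              = "gp2"
}

# One single-AZ ASG per AZ, pinned to that AZ's subnet
resource "aws_autoscaling_group" "per_az" {
  for_each             = var.az_subnets
  name                 = "${var.name}-${each.key}"
  min_size             = 1
  max_size             = 1
  desired_capacity     = 1
  launch_configuration = aws_launch_configuration.this.id
  vpc_zone_identifier  = [each.value]
}

The user data then only needs to look for an unattached volume in its own AZ, as described above.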
But there is a company that offers managing this as a service; they can also help you run these as spot instances to save cost: https://spot.io/
Their elastigroup resource is an ASG managed by them, so you won't have an AWS ASG anymore, but they have some interesting stateful configurations.
We support instance persistence via the following configurations. All values are boolean. For more information on instance persistence please see: Stateful configuration.
persist_root_device - (Optional) Boolean, should the instance maintain its root device volumes.
persist_block_devices - (Optional) Boolean, should the instance maintain its Data volumes.
persist_private_ip - (Optional) Boolean, should the instance maintain its private IP.
block_devices_mode - (Optional) String, determine the way we attach the data volumes to the data devices, possible values: "reattach" and "onLaunch" (default is onLaunch).
private_ips - (Optional) List of Private IPs to associate to the group instances (e.g. "172.1.1.0"). Please note: this setting will only apply if persistence.persist_private_ip is set to true.
stateful_deallocation {
  should_delete_images             = false
  should_delete_network_interfaces = false
  should_delete_volumes            = false
  should_delete_snapshots          = false
}
This allows you to have an autoscaler that preserves volumes and handles the complexities for you.
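Assuming those settings are top-level arguments of the spotinst_elastigroup_aws resource from the Spot provider (the fragment below is only illustrative, with all required group settings omitted), the stateful configuration would sit roughly like this:

resource "spotinst_elastigroup_aws" "example" {
  # ... required group settings (name, region, image, sizes, etc.) omitted ...

  persist_root_device   = true
  persist_block_devices = true
  persist_private_ip    = true
  block_devices_mode    = "reattach"

  stateful_deallocation {
    should_delete_images             = false
    should_delete_network_interfaces = false
    should_delete_volumes            = false
    should_delete_snapshots          = false
  }
}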
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG already exists. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I want to destroy the instance, Terraform tries to destroy my VPC as well. How do I organize my resources so that when I run "terraform apply" I can create an EC2 instance in my imported VPC, but when I run "terraform destroy" I only destroy the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
  cidr_block = "xx.xx.xx.xx/24"
}

provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.nano"
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
  subnet_id              = "subnet-0755c2exxxxxxxx"

  tags = {
    Name = "HelloWorld"
  }
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID, so Terraform is aware of your existing network infrastructure, just like you've already done for the security group and subnet. All you should need to deploy the EC2 instance with "aws_instance" is an existing subnet ID in the existing VPC, which you already have. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC resource and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
  tags = {
    Name = "main_vpc"
  }
}

Or

data "aws_vpc" "main" {
  id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
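For example, you can pick a subnet from that existing VPC and launch the instance into it without ever declaring the VPC as a resource (the subnet lookup and the element() call are just one way to choose a subnet; keeping the hardcoded subnet_id from the question works too):

data "aws_subnet_ids" "main" {
  vpc_id = "${data.aws_vpc.main.id}"
}

resource "aws_instance" "web" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.nano"
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
  subnet_id              = "${element(data.aws_subnet_ids.main.ids, 0)}"
}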
Also, if you already included your VPC in your configuration but would like to remove it from your state without destroying it, you can do that with the terraform state command (in this case something like terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html
I have created a launch configuration which uses the AWS CoreOS AMI as its image. It is attached to an AWS Auto Scaling Group. All of the above has been done via Terraform. But when the Auto Scaling Group tries to create the instance, it fails with the following error:
StatusMessage: "In order to use this AWS Marketplace product you need to accept terms and subscribe. To do so please visit https://aws.amazon.com/marketplace/pp?sku=ryg425ue2hwnsok9ccfastg4. Launching EC2 instance failed."
It seems like I have to subscribe to use this CoreOS AMI, but when I create an instance in the AWS console, I just select the CoreOS image from the Marketplace and continue with the rest of the instance configuration. How do I achieve this in Terraform? Should I subscribe to the AWS CoreOS AMI beforehand, or is there a way to bypass this in Terraform?
All the related files and the error trace are given below.
launch-configuration.tf File
resource "aws_launch_configuration" "tomcat-webapps-all" {
  name            = "tomcat-webapps-all"
  image_id        = "ami-028e043d0e518a84a"
  instance_type   = "t2.micro"
  key_name        = "rnf-sec"
  security_groups = ["${aws_security_group.allow-multi-tomcat-webapp-traffic.id}"]
  user_data       = "${data.ignition_config.webapps.rendered}"
}
auto-scale-group.tf File
resource "aws_autoscaling_group" "tomcat-webapps-all-asg" {
  name                      = "tomcat-webapps-all-asg"
  depends_on                = ["aws_launch_configuration.tomcat-webapps-all"]
  vpc_zone_identifier       = ["${aws_default_subnet.default-az1.id}", "${aws_default_subnet.default-az2.id}", "${aws_default_subnet.default-az3.id}"]
  max_size                  = 1
  min_size                  = 0
  health_check_grace_period = 300
  health_check_type         = "EC2"
  desired_capacity          = 1
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.tomcat-webapps-all.id}"
  target_group_arns         = ["${aws_lb_target_group.newdasboard-lb-tg.arn}", "${aws_lb_target_group.signup-lb-tg.arn}"]
}
Error Trace
Error: Error applying plan:
1 error(s) occurred:
* aws_autoscaling_group.tomcat-webapps-all-asg: 1 error(s) occurred:
* aws_autoscaling_group.tomcat-webapps-all-asg: "tomcat-webapps-all-asg": Waiting up to 10m0s: Need at least 1 healthy instances in ASG, have 0. Most recent activity: {
    ActivityId: "9455ab55-426a-c888-ac95-2d45c78d445a",
    AutoScalingGroupName: "tomcat-webapps-all-asg",
    Cause: "At 2019-05-20T12:56:29Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.",
    Description: "Launching a new EC2 instance. Status Reason: In order to use this AWS Marketplace product you need to accept terms and subscribe. To do so please visit https://aws.amazon.com/marketplace/pp?sku=ryg425ue2hwnsok9ccfastg4. Launching EC2 instance failed.",
    Details: "{\"Subnet ID\":\"subnet-c650458f\",\"Availability Zone\":\"ap-southeast-1a\"}",
    EndTime: 2019-05-20 12:56:30 +0000 UTC,
    Progress: 100,
    StartTime: 2019-05-20 12:56:30.642 +0000 UTC,
    StatusCode: "Failed",
    StatusMessage: "In order to use this AWS Marketplace product you need to accept terms and subscribe. To do so please visit https://aws.amazon.com/marketplace/pp?sku=ryg425ue2hwnsok9ccfastg4. Launching EC2 instance failed."
  }
If you log into the console and accept the EULA terms once, this error will go away when you apply via Terraform.
If I were you, I'd log in, go through the whole process of launching an instance with this AMI, terminate it, then apply the Terraform.
If somebody else is having the same issue: I was able to solve it by logging into the EC2 console as the root user and subscribing to the AWS CoreOS product page on the AWS Marketplace.
After that, everything worked as expected. The error already includes a web URL to the CoreOS product page on the AWS Marketplace; it's just a matter of clicking the Continue to Subscribe button.
If the above steps don't work, refer to this answer: https://stackoverflow.com/a/56222898/4334340
I got the following error from AWS today.
"We currently do not have sufficient m3.large capacity in the Availability Zone you requested (us-east-1a). Our system will be working on provisioning additional capacity. You can currently get m3.large capacity by not specifying an Availability Zone in your request or choosing us-east-1e, us-east-1b."
What does this mean exactly? It sounds like AWS doesn't have the physical resources to allocate me the virtual resources that I need. That seems unbelievable though.
What's the solution? Is there an easy way to change the availability zone of an instance?
Or do I need to create an AMI and restore it in a new availability zone?
This is not a new issue, and you cannot change the availability zone of an existing instance. The best option is to create an AMI and relaunch the instance in a new AZ, as you already said; you would then have everything in place. If you want to go across regions, see this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
You can also try reserved instances, which guarantee capacity for your instances at all times.
I fixed this error by fixing my aws_region and availability_zone values. Once I added aws_subnet_ids, the error message showed me exactly which zone my EC2 instance was being created in.
variable "availability_zone" {
  default = "ap-southeast-2c"
}

variable "aws_region" {
  description = "EC2 Region for the VPC"
  default     = "ap-southeast-2"
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "all" {
  vpc_id = "${data.aws_vpc.default.id}"
}

resource "aws_instance" "ec2" {
  ....
  subnet_id         = "${element(data.aws_subnet_ids.all.ids, 0)}"
  availability_zone = "${var.availability_zone}"
}