I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup, and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
resource "template_file" "userdata" {
filename = "${var.userdata}"
vars {
domain = "${var.domain}"
name = "${var.name}"
}
}
resource "aws_spot_instance_request" "machine" {
ami = "${var.amiPuppet}"
key_name = "${var.key}"
instance_type = "c3.4xlarge"
subnet_id = "${var.subnet}"
vpc_security_group_ids = [ "${var.securityGroup}" ]
user_data = "${template_file.userdata.rendered}"
wait_for_fulfillment = true
spot_price = "${var.price}"
tags {
Name = "${var.name}.${var.domain}"
Provider = "Terraform"
}
}
resource "aws_route53_record" "machine" {
zone_id = "${var.route53ZoneId}"
name = "${aws_spot_instance_request.machine.tags.Name}"
type = "A"
ttl = "300"
records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working...
The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to aws_spot_instance_request (with the addition of the price)... am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform, which seems to be fixed in master:
https://github.com/hashicorp/terraform/issues/1339
I have 2 Windows Server AMIs:
The first one acts as a client (since, as per my research, there are no Windows 10 Enterprise AMIs in AWS) and the second AMI has an AD.
I would like to create a Terraform script that automatically creates EC2 instances from the AMIs. The script should also configure AD with a domain and then make the Windows instances members of the domain.
Is that possible? If not, would it be possible to achieve using a user data script?
It's not very clear exactly what you need, but the 2 resources you will need for sure are:
1. The EC2 resource is aws_instance and is configured like:
resource "aws_instance" "my-ec2" {
ami = "ami-058b1b7fe545997ae"
instance_type = "t2.micro"
subnet_id = "your-subnet-id"
availability_zone = "your-availability_zone"
vpc_security_group_ids = "your-security-group"
tags = {
Name = "me"
Environment = "dev"
}
}
2. The AD resource is aws_directory_service_directory:
resource "aws_directory_service_directory" "my-ad" {
name = "corp.notexample.com"
password = "SuperSecretPassw0rd"
size = "Small"
vpc_settings {
vpc_id = "my-vpc-id"
subnet_ids = [aws_subnet.foo.id, aws_subnet.bar.id]
}
connect_settings {
customer_dns_ips = ["A.B.C.D"]
customer_username = "Admin"
subnet_ids = [aws_subnet.foo.id, aws_subnet.bar.id]
vpc_id = "my-vpc-id"
}
tags = {
Name = "me"
Environment = "dev"
}
}
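For the domain-join part of the question, it can typically be done with a user data PowerShell script on the Windows instance. A minimal sketch, assuming placeholder AMI, subnet, security group, domain name and credentials (in practice, pull the credentials from a secret store rather than hard-coding them):
# Hypothetical example: join the Windows instance to the directory at boot via user data.
resource "aws_instance" "windows_client" {
  ami                    = "ami-xxxxxxxxxxxxxxxxx" # your Windows Server client AMI
  instance_type          = "t3.medium"
  subnet_id              = "your-subnet-id"
  vpc_security_group_ids = ["your-security-group-id"]

  user_data = <<-EOF
    <powershell>
    # Placeholder domain and credentials, for illustration only.
    $password   = ConvertTo-SecureString "SuperSecretPassw0rd" -AsPlainText -Force
    $credential = New-Object System.Management.Automation.PSCredential("Admin@corp.notexample.com", $password)
    Add-Computer -DomainName "corp.notexample.com" -Credential $credential -Restart
    </powershell>
  EOF

  tags = {
    Name = "windows-client"
  }
}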
Currently I have a module that is used as a template to create a lot of EC2 instances in AWS. Using this template with volume_tags, I would expect all the EBS volumes created along with the EC2 instances to get the same tags.
However, the issue is that after I create the EC2 instances using this Terraform script, on some occasions I'll need to mount a few more EBS volumes to an instance, and those volumes will get different tags (e.g. the Name tag is volume_123).
After mounting the volume to the EC2 instance in the AWS web console, I run terraform init and terraform plan again, and it tells me there are changes to apply, as the volume_tags of the EC2 instance appear to have 'replaced' the original Name tag of the volume. Example output:
# module.ec2_2.aws_instance.ec2 will be updated in-place
~ resource "aws_instance" "ec2" {
      id          = "i-99999999999999999"
    ~ volume_tags = {
        ~ "Name" = "volume_123" -> "ec22"
      }
  }
When reading the documentation of the Terraform AWS provider, I understood that volume_tags should only apply when the instance is created. However, it seems that even after creation Terraform will still try to align the tags of every EBS volume attached to the EC2 instance. As I need to keep the newly attached volumes with a different set of tags than the root and EBS volumes attached when the instance is created (different AMIs have different numbers of block devices), should I avoid using volume_tags to tag the volumes at creation? And if I don't use it, what should I do instead?
The following is the code:
terraform_folder/modules/ec2_template/main.tf
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
}
terraform_folder/deployment/machines.tf
module "ec2_1" {
source = "../modules/ec2_template"
name = "ec21"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_2" {
source = "../modules/ec2_template"
name = "ec22"
ami = local.ec2_ami_2["b"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
module "ec2_3" {
source = "../modules/ec2_template"
name = "ec23"
ami = local.ec2_ami_1["a"]
instance_type = local.ec2_instance_type["app"]
tags = merge(
map(
"Role", "app",
),
local.tags_default
)
}
terraform_folder/deployment/locals.tf
locals {
  ec2_ami_1 = {
    a = "ami-11111111111111111"
    b = "ami-22222222222222222"
  }

  ec2_ami_2 = {
    a = "ami-33333333333333333"
    b = "ami-44444444444444444"
  }

  ec2_ami_3 = {
    a = "ami-55555555555555555"
    b = "ami-66666666666666666"
  }

  tags_default = {
    Terraform       = "true"
    Environment     = "test"
    Application     = "app"
    BackupFrequency = "2"
  }
}
You shouldn't be modifying resources managed by Terraform manually from the AWS Console. This leads to resource drift and the issues you are experiencing.
Nevertheless, you can use the lifecycle meta-argument to tell Terraform to ignore changes to your volume tags:
resource "aws_instance" "ec2" {
ami = var.ami
availability_zone = var.availability_zone
instance_type = var.instance_type
tags = merge(map("Name", var.name), var.tags)
volume_tags = merge(map("Name", var.name), var.tags)
lifecycle {
ignore_changes = [volume_tags]
}
}
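If you instead want the extra volumes to carry their own tags while still being tracked by Terraform (rather than being attached by hand in the console), a minimal sketch could manage them explicitly; the resource names, size and device name below are made up, and volume_tags on the instance should then be dropped so the two mechanisms don't fight over the same tags:
# Hypothetical extra volume managed by Terraform with its own Name tag.
resource "aws_ebs_volume" "extra" {
  availability_zone = var.availability_zone
  size              = 100

  tags = {
    Name = "volume_123"
  }
}

resource "aws_volume_attachment" "extra" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.extra.id
  instance_id = aws_instance.ec2.id
}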
I'm trying to get a DocumentDB cluster up and running, and have it run from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message, as the subnet hasn't been created:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added the depends_on setting to wait for the subnet to be created, but am still running into an issue:
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may schedule a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
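For DocumentDB specifically there is also a dedicated aws_docdb_subnet_group resource. Applied to the cluster from your question, a sketch reusing the subnets above and the values you already have might look like:
resource "aws_docdb_subnet_group" "example" {
  name = "my-docdb-subnet-group"
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id
  ]
}

resource "aws_docdb_cluster" "docdb" {
  cluster_identifier   = "my-docdb-cluster"
  engine               = "docdb"
  master_username      = "myusername"
  master_password      = "mypassword"
  skip_final_snapshot  = true
  db_subnet_group_name = aws_docdb_subnet_group.example.name
}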
The same thing applies to ElastiCache subnet groups, which use the aws_elasticache_subnet_group resource.
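For completeness, a minimal ElastiCache equivalent (a sketch reusing the same subnets) would be:
resource "aws_elasticache_subnet_group" "example" {
  name       = "my-cache-subnet-group"
  subnet_ids = [aws_subnet.eu-west-3a.id, aws_subnet.eu-west-3b.id]
}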
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta-argument is only for resources that don't expose a parameter that would provide this dependency information directly.
It seems the value in the parameter is wrong. An aws_db_subnet_group created somewhere else gives id/arn as its output, so you need to use the id value, although the depends_on clause looks OK:
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct; you can also try arn in place of id.
In my application I am using an AWS Auto Scaling group created with Terraform. I launch the Auto Scaling group giving it a number of instances in a region, but since only 20 instances are allowed in a region, I want an Auto Scaling group that will create instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
  required_version = ">= 0.9.3"
}

provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "us-east-1"
}

provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}

provider "aws" {
  alias  = "us-west-2"
  region = "us-west-2"
}

provider "aws" {
  alias  = "eu-west-1"
  region = "eu-west-1"
}

provider "aws" {
  alias  = "eu-central-1"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "ap-southeast-1"
  region = "ap-southeast-1"
}

provider "aws" {
  alias  = "ap-southeast-2"
  region = "ap-southeast-2"
}

provider "aws" {
  alias  = "ap-northeast-1"
  region = "ap-northeast-1"
}

provider "aws" {
  alias  = "sa-east-1"
  region = "sa-east-1"
}

resource "aws_launch_configuration" "launch_configuration" {
  name_prefix                 = "${var.asg_name}-"
  image_id                    = "${var.ami_id}"
  instance_type               = "${var.instance_type}"
  associate_public_ip_address = true
  key_name                    = "${var.key_name}"
  security_groups             = ["${var.security_group_id}"]
  user_data                   = "${data.template_file.user_data_client.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work; it only launches instances in a single region and throws an error once they reach 20.
How can we create instances across multiple regions in an Auto Scaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the providers to the module, as sketched below).
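A rough sketch of the module approach; the ./modules/asg path and the per-region AMI variable are hypothetical, and passing providers into a module requires Terraform 0.11 or newer:
# Hypothetical module wrapping the launch configuration + ASG for one region.
module "asg_eu_central_1" {
  source = "./modules/asg"

  asg_name         = "${var.asg_name}-eu-central-1"
  ami_id           = "${var.ami_id_eu_central_1}" # AMI IDs are region-specific
  instance_type    = "${var.instance_type}"
  desired_capacity = "${var.desired_capacity}"

  # Hand the aliased provider to every aws resource inside the module.
  providers = {
    "aws" = "aws.eu-central-1"
  }
}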
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.
After RDS and ElastiCache are created by Terraform,
I would like to adjust the ordering so that EC2 is set up afterwards.
Is this feasible with Terraform?
To be precise, I am running Docker on EC2. I would like to pass the endpoints of the ElastiCache and RDS instances created by Terraform to Docker as environment variables.
Thank you for reading my question.
It is feasible with Terraform's implicit and explicit dependencies.
So you can define which resource should be created first and which one after.
Explicit ordering is supported by the following construct, which takes a list of resources:
depends_on = [
  "<resource_type.resource_name>",
]
Here is an example:
resource "aws_db_instance" "rds_example" {
allocated_storage = 10
storage_type = "gp2"
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t1.micro"
name = "mydb"
username = "foo"
password = "bar"
db_subnet_group_name = "my_database_subnet_group"
parameter_group_name = "default.mysql5.6"
}
resource "aws_instance" "ec2_example" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.micro"
tags {
Name = "HelloWorld"
}
depends_on = [
"aws_db_instance.rds_example",
]
}
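For the Docker environment variables part, referencing the endpoints in user data creates the same ordering implicitly, since referring to another resource's attributes makes the EC2 instance wait for it. A rough sketch (the ElastiCache cluster and the docker run command are illustrative, not from your configuration):
# Hypothetical ElastiCache cluster; referencing it below creates an implicit dependency.
resource "aws_elasticache_cluster" "cache_example" {
  cluster_id      = "my-cache"
  engine          = "redis"
  node_type       = "cache.t2.micro"
  num_cache_nodes = 1
}

resource "aws_instance" "ec2_example_with_endpoints" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  # Interpolating the RDS and ElastiCache attributes here makes Terraform create
  # them before this instance, without needing depends_on.
  user_data = <<-EOF
    #!/bin/bash
    docker run -d \
      -e DB_ENDPOINT="${aws_db_instance.rds_example.endpoint}" \
      -e CACHE_ENDPOINT="${aws_elasticache_cluster.cache_example.cache_nodes.0.address}" \
      my-app-image
  EOF
}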