I have 2 Windows Server AMIs:
the first one acts as a client (since, as per my research, there are no Windows 10 Enterprise AMIs in AWS), and the second AMI has AD installed.
I would like to create a Terraform script that automatically creates EC2 instances from these AMIs. The script should also configure AD with a domain and then make the Windows client a member of that domain.
Is that possible? If not, would it be possible to achieve this using a user data script?
It's not very clear what exactly you need, but the 2 resources you will need for sure are:
1. The EC2 resource is aws_instance and is configured like this:
resource "aws_instance" "my-ec2" {
  ami                    = "ami-058b1b7fe545997ae"
  instance_type          = "t2.micro"
  subnet_id              = "your-subnet-id"
  availability_zone      = "your-availability_zone"
  vpc_security_group_ids = ["your-security-group"]

  tags = {
    Name        = "me"
    Environment = "dev"
  }
}
2. The AD resource is aws_directory_service_directory:
resource "aws_directory_service_directory" "my-ad" {
  name     = "corp.notexample.com"
  password = "SuperSecretPassw0rd"
  size     = "Small"

  # vpc_settings is used for Simple AD and Microsoft AD directories.
  vpc_settings {
    vpc_id     = "my-vpc-id"
    subnet_ids = [aws_subnet.foo.id, aws_subnet.bar.id]
  }

  # connect_settings is used instead of vpc_settings when type = "ADConnector";
  # don't specify both blocks on the same resource.
  # connect_settings {
  #   customer_dns_ips  = ["A.B.C.D"]
  #   customer_username = "Admin"
  #   subnet_ids        = [aws_subnet.foo.id, aws_subnet.bar.id]
  #   vpc_id            = "my-vpc-id"
  # }

  tags = {
    Name        = "me"
    Environment = "dev"
  }
}
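To cover the "make the Windows client a member of the domain" part, one common approach is to pass a PowerShell user data script to the client instance. The following is only a sketch: the AMI ID, subnet, domain name, and credentials are placeholders, and in practice the instance's DNS would need to point at the directory's dns_ip_addresses before the join can succeed.

```hcl
# Hypothetical sketch: join the client instance to the AD domain at boot.
# Domain name and credentials below are placeholders, not working values.
resource "aws_instance" "client" {
  ami           = "ami-058b1b7fe545997ae"
  instance_type = "t2.micro"
  subnet_id     = "your-subnet-id"

  user_data = <<-EOF
    <powershell>
    $password = ConvertTo-SecureString "SuperSecretPassw0rd" -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential("corp\Admin", $password)
    # Add-Computer is the standard PowerShell cmdlet for joining a domain.
    Add-Computer -DomainName "corp.notexample.com" -Credential $cred -Restart
    </powershell>
  EOF
}
```

An alternative worth evaluating is AWS Systems Manager's AWS-JoinDirectoryServiceDomain document, which performs the join without embedding credentials in user data.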
Currently I have a module that is used as a template to create a lot of EC2 instances in AWS. Since this template sets volume_tags, I expect all the EBS volumes created along with the EC2 instances to get the same tags.
However, after creating the EC2 instances using this Terraform script, on some occasions I need to mount a few more EBS volumes to an instance, and those volumes get different tags (e.g. the Name tag is volume_123).
After mounting the volume to the EC2 instance in the AWS web console, I run terraform init and terraform plan again, and it tells me there are changes to apply: the volume_tags of the EC2 instance appear to 'replace' the original Name volume tags. Example output:
  # module.ec2_2.aws_instance.ec2 will be updated in-place
  ~ resource "aws_instance" "ec2" {
        id          = "i-99999999999999999"
      ~ volume_tags = {
          ~ "Name" = "volume_123" -> "ec22"
        }
    }
When I was reading the documentation of the Terraform AWS provider, I understood that volume_tags should only apply when the instance is created. However, it seems that even after creation Terraform still tries to align the tags of every EBS volume attached to the EC2 instance. Since I need to keep the newly attached volumes with a different set of tags than the root and EBS volumes attached when the EC2 instance is created (different AMIs have different numbers of block devices), should I avoid using volume_tags to tag the volumes at creation? And if so, what should I do instead?
Here is the code:
terraform_folder/modules/ec2_template/main.tf
resource "aws_instance" "ec2" {
  ami               = var.ami
  availability_zone = var.availability_zone
  instance_type     = var.instance_type
  tags              = merge(map("Name", var.name), var.tags)
  volume_tags       = merge(map("Name", var.name), var.tags)
}
terraform_folder/deployment/machines.tf
module "ec2_1" {
  source        = "../modules/ec2_template"
  name          = "ec21"
  ami           = local.ec2_ami_1["a"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}

module "ec2_2" {
  source        = "../modules/ec2_template"
  name          = "ec22"
  ami           = local.ec2_ami_2["b"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}

module "ec2_3" {
  source        = "../modules/ec2_template"
  name          = "ec23"
  ami           = local.ec2_ami_1["a"]
  instance_type = local.ec2_instance_type["app"]

  tags = merge(
    map(
      "Role", "app",
    ),
    local.tags_default
  )
}
terraform_folder/deployment/locals.tf
locals {
  ec2_ami_1 = {
    a = "ami-11111111111111111"
    b = "ami-22222222222222222"
  }

  ec2_ami_2 = {
    a = "ami-33333333333333333"
    b = "ami-44444444444444444"
  }

  ec2_ami_3 = {
    a = "ami-55555555555555555"
    b = "ami-66666666666666666"
  }

  tags_default = {
    Terraform       = "true"
    Environment     = "test"
    Application     = "app"
    BackupFrequency = "2"
  }
}
You shouldn't modify resources managed by Terraform manually through the AWS Console. This leads to resource drift and the issue you are experiencing.
Nevertheless, you can use the lifecycle meta-argument to tell Terraform to ignore changes to your volume tags:
resource "aws_instance" "ec2" {
  ami               = var.ami
  availability_zone = var.availability_zone
  instance_type     = var.instance_type
  tags              = merge(map("Name", var.name), var.tags)
  volume_tags       = merge(map("Name", var.name), var.tags)

  lifecycle {
    ignore_changes = [volume_tags]
  }
}
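If the extra volumes should keep their own tags without any console-side drift at all, another option is to manage them in Terraform as separate resources with their own tags instead of attaching them by hand. A minimal sketch (the volume size, device name, and Name tag are made up for illustration):

```hcl
# Hypothetical sketch: manage the extra volume in Terraform with its own tags,
# so volume_tags on the instance never has to account for it being attached
# through the console.
resource "aws_ebs_volume" "extra" {
  availability_zone = var.availability_zone
  size              = 20

  tags = {
    Name = "volume_123"
  }
}

resource "aws_volume_attachment" "extra" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.extra.id
  instance_id = aws_instance.ec2.id
}
```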
I have started learning Terraform recently and wanted to create an environment using the above stated settings. When I run the code below, I get two resources deployed: one is an Elastic Beanstalk environment and the other is an Auto Scaling group (ASG). The ASG has the desired settings but is not linked with the Beanstalk environment, hence I am trying to connect these two.
(I copy the Beanstalk ID from the Tags section, then head over to the ASG under EC2, search for that ID, and look at the Health Check section.)
resource "aws_autoscaling_group" "example" {
  launch_configuration      = aws_launch_configuration.as_conf.id
  min_size                  = 2
  max_size                  = 10
  availability_zones        = ["us-east-1a"]
  health_check_type         = "ELB"
  health_check_grace_period = 1500

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_elastic_beanstalk_application" "application" {
  name = "Test-app"
}

resource "aws_elastic_beanstalk_environment" "environment" {
  name                = "Test-app"
  application         = aws_elastic_beanstalk_application.application.name
  solution_stack_name = "64bit Windows Server Core 2019 v2.5.6 running IIS 10.0"

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = "aws-elasticbeanstalk-ec2-role"
  }

  setting {
    namespace = "aws:autscaling"
  }
}

resource "aws_launch_configuration" "as_conf" {
  name          = "web_config_shivanshu"
  image_id      = "ami-2757f631"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
You do not create an ASG or launch configuration/template outside of the Elastic Beanstalk environment and join them together, as there are config options which are not available that way. For example, gp3 SSD is available as part of a launch template, but not yet as part of Elastic Beanstalk.
What you want to do is remove the resources
resource "aws_launch_configuration" "as_conf"
resource "aws_autoscaling_group" "example"
and then utilise the setting {} block a lot more within resource "aws_elastic_beanstalk_environment" "environment".
Here is a list of all the settings you can describe in the setting block: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html
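For instance, the ASG sizing from the standalone resource in the question could be expressed directly as setting blocks. A sketch, using namespaces from the general options list linked above:

```hcl
# Sketch: configure the environment's ASG through Beanstalk itself,
# instead of a separate aws_autoscaling_group resource.
resource "aws_elastic_beanstalk_environment" "environment" {
  name                = "Test-app"
  application         = aws_elastic_beanstalk_application.application.name
  solution_stack_name = "64bit Windows Server Core 2019 v2.5.6 running IIS 10.0"

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "2"
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = "10"
  }
}
```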
So I worked out how to change the Auto Scaling group (ASG) of the Beanstalk environment we created using Terraform. First of all, create the Beanstalk environment according to your settings; we use the setting block in the Beanstalk resource, with the appropriate namespaces, to configure it according to our needs.
Step-1
Create a beanstalk using terraform
resource "aws_elastic_beanstalk_environment" "test" {
  ...
}
Step-2
After you have created the Beanstalk environment, create an Auto Scaling group resource skeleton; the ASG associated with the Beanstalk environment will then be managed by Terraform under this resource block. Import it using the ID of the ASG, which you can get from either terraform plan or terraform show:
terraform import aws_autoscaling_group.<Name that you give> asg-id
Step-3
After you have done that, change the Beanstalk environment according to your needs,
and make sure you have added these tags, because sometimes I have noticed that the mapping of this ASG to the Beanstalk environment is lost:
tag {
  key                 = "elasticbeanstalk:environment-id"
  propagate_at_launch = true
  value               = aws_elastic_beanstalk_environment.<Name of your beanstalk>.id
}

tag {
  key                 = "elasticbeanstalk:environment-name"
  propagate_at_launch = true
  value               = aws_elastic_beanstalk_environment.<Name of your beanstalk>.name
}
After the RDS and ElastiCache instances are created in Terraform,
I would like to adjust the ordering so that EC2 is set up afterwards.
Is this feasible with Terraform?
To be precise, I am running Docker on EC2, and I would like to pass the endpoints of the ElastiCache and RDS instances created by Terraform to Docker via environment variables.
Thank you for reading my question.
It is feasible with Terraform's implicit and explicit dependencies.
So you can define which resource should be created first and which one after.
This is supported by the depends_on meta-argument, which takes a list of resources:
depends_on = [
  "", "",
]
Here is an example:
resource "aws_db_instance" "rds_example" {
  allocated_storage    = 10
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.6.17"
  instance_class       = "db.t1.micro"
  name                 = "mydb"
  username             = "foo"
  password             = "bar"
  db_subnet_group_name = "my_database_subnet_group"
  parameter_group_name = "default.mysql5.6"
}

resource "aws_instance" "ec2_example" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags {
    Name = "HelloWorld"
  }

  depends_on = [
    "aws_db_instance.rds_example",
  ]
}
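As for passing the endpoints to Docker, they can be interpolated straight into the EC2 user data, which also gives you the dependency implicitly. A sketch in the same Terraform 0.11-era syntax; the image name "my-app" is a placeholder, and an ElastiCache endpoint would be interpolated the same way from its own resource's attributes:

```hcl
# Sketch: pass the RDS endpoint to the container as an environment variable.
# Referencing aws_db_instance.rds_example here creates an implicit dependency,
# so depends_on is not even needed.
resource "aws_instance" "ec2_example" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    docker run -d -e DB_ENDPOINT="${aws_db_instance.rds_example.endpoint}" my-app
  EOF
}
```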
I have a Terraform script that launches a VPC, subnets, a database, autoscaling, and some other things. Autoscaling uses default Windows Server 2012 R2 images to launch new instances (including the initial ones). Every instance executes a Chef install after launch. I need to log into an instance so I can confirm that Chef is installed, but I don't have any .pem keys. How do I launch an instance with autoscaling and a launch configuration, and output a .pem file so I can log in afterwards?
Here is my autoscaling part of the script:
resource "aws_autoscaling_group" "asgPrimary" {
  depends_on                = ["aws_launch_configuration.primary"]
  availability_zones        = ["${data.aws_availability_zones.available.names[0]}"]
  name                      = "TerraformASGPrimary"
  max_size                  = 1
  min_size                  = 1
  wait_for_capacity_timeout = "0"
  health_check_grace_period = 300
  health_check_type         = "ELB"
  desired_capacity          = 1
  force_delete              = false
  vpc_zone_identifier       = ["${aws_subnet.private_primary.id}"]
  #placement_group          = "${aws_placement_group.test.id}"
  launch_configuration      = "${aws_launch_configuration.primary.name}"
  load_balancers            = ["${aws_elb.elb.name}"]
}
and this is my launch configuration:
resource "aws_launch_configuration" "primary" {
  depends_on      = ["aws_subnet.primary"]
  name            = "web_config_primary"
  image_id        = "${data.aws_ami.amazon_windows_2012R2.id}"
  instance_type   = "${var.ami_type}"
  security_groups = ["${aws_security_group.primary.id}"]
  user_data       = "${template_file.user_data.rendered}"
}
I need to avoid using the AWS CLI or the web console itself; the point is for all of this to be automated for reuse in all my other solutions.
The .pem files used to RDP/SSH into an EC2 instance are not generated during launch of the instance. It may appear that way when using the AWS Management Console, but in actuality the key pair is generated first, and then that key pair is assigned to the EC2 instance during launch.
To get your .pem file, first:
Generate a new key pair. See Amazon EC2 Key Pairs. When you do this, you will be able to download the .pem file.
Then assign that key pair to your Auto Scaling group's launch configuration using the key_name argument.
Here's an example:
resource "aws_launch_configuration" "primary" {
  depends_on      = ["aws_subnet.primary"]
  name            = "web_config_primary"
  image_id        = "${data.aws_ami.amazon_windows_2012R2.id}"
  instance_type   = "${var.ami_type}"
  security_groups = ["${aws_security_group.primary.id}"]
  user_data       = "${template_file.user_data.rendered}"
  key_name        = "my-key-pair"
}
See: https://www.terraform.io/docs/providers/aws/r/launch_configuration.html#key_name
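If even the key pair creation must stay inside Terraform (no CLI or console step), one option is to generate it with the tls provider. A sketch in the same Terraform 0.11-era syntax; note that the private key then lives in the Terraform state, which may be a security concern:

```hcl
# Sketch: generate the key pair entirely in Terraform.
# The private key is stored in state, so protect the state file accordingly.
resource "tls_private_key" "generated" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated" {
  key_name   = "my-key-pair"
  public_key = "${tls_private_key.generated.public_key_openssh}"
}

# Expose the PEM so it can be saved with `terraform output`.
output "private_key_pem" {
  value = "${tls_private_key.generated.private_key_pem}"
}
```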
I'm trying to launch a spot instance inside a VPC using Terraform.
I had a working aws_instance setup, and just changed it to aws_spot_instance_request, but I always get this error:
* aws_spot_instance_request.machine: Error requesting spot instances: InvalidParameterCombination: VPC security groups may not be used for a non-VPC launch
status code: 400, request id: []
My .tf file looks like this:
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "template_file" "userdata" {
  filename = "${var.userdata}"

  vars {
    domain = "${var.domain}"
    name   = "${var.name}"
  }
}

resource "aws_spot_instance_request" "machine" {
  ami                    = "${var.amiPuppet}"
  key_name               = "${var.key}"
  instance_type          = "c3.4xlarge"
  subnet_id              = "${var.subnet}"
  vpc_security_group_ids = ["${var.securityGroup}"]
  user_data              = "${template_file.userdata.rendered}"
  wait_for_fulfillment   = true
  spot_price             = "${var.price}"

  tags {
    Name     = "${var.name}.${var.domain}"
    Provider = "Terraform"
  }
}

resource "aws_route53_record" "machine" {
  zone_id = "${var.route53ZoneId}"
  name    = "${aws_spot_instance_request.machine.tags.Name}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_spot_instance_request.machine.private_ip}"]
}
I don't understand why it isn't working.
The documentation states that aws_spot_instance_request supports all parameters of aws_instance, so I just changed a working aws_instance to an aws_spot_instance_request (with the addition of the price). Am I doing something wrong?
I originally opened this as an issue in the Terraform repo, but no one replied.
It's a bug in Terraform; it seems to be fixed in master:
https://github.com/hashicorp/terraform/issues/1339