Rephrasing the title: how can I avoid having to guess the attached EBS volume's device name in the generated aws_instance user_data? That is, how can I avoid the extra attached_device_actual_name variable in the Terraform locals below?
Here's the relevant Terraform configuration:
locals {
  attached_device_name        = "/dev/sdf"     # Used in `aws_volume_attachment`.
  attached_device_actual_name = "/dev/nvme1n1" # Used in `templatefile`.
}

resource "aws_instance" "foo" {
  user_data = templatefile("./user-data.sh.tpl", {
    attached_device_name = local.attached_device_actual_name
  })
}

resource "aws_volume_attachment" "foo" {
  device_name = local.attached_device_name
  instance_id = aws_instance.foo.id
}
The docs say
The device names that you specify for NVMe EBS volumes in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1).
Does the above-quoted part ("device names [...] are renamed") also imply that one should not use these reserved /dev/nvme... names? Indeed, if I set local.attached_device_name to /dev/nvme1n1, which happens to be a "correct" guess in this case, this error pops up:
Error: Error attaching volume (vol-some_id) to instance (i-some_id), message: "Value (/dev/nvme1n1) for parameter device is invalid. /dev/nvme1n1 is not a valid EBS device name.", code: "InvalidParameterValue"
"/dev/nvme1n1 is not a valid EBS device name."
The goal was to have user_data in sync with the attached volume's device name so the instance can wait for the volume:
DEVICE="${attached_device_name}"
while [ ! -e "$${DEVICE}" ]; do
  echo "Waiting for $${DEVICE} ..."
  sleep 1
done
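One way to sidestep the guess entirely is to wait for the /dev/disk/by-id symlink that udev creates for EBS NVMe volumes and resolve it to the real device. A sketch, assuming the volume ID is also passed into the template (e.g. as volume_id = aws_ebs_volume.foo.id, a resource name not shown in the question):
# user-data.sh.tpl (bash; the volume ID is handed in via templatefile)
VOLUME_ID="${volume_id}"   # e.g. vol-0123456789abcdef0
# On distros with the standard udev rules, EBS NVMe volumes get a symlink of the form
# /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_<volume id without the dash>.
LINK="/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_$${VOLUME_ID/-/}"
while [ ! -e "$${LINK}" ]; do
  echo "Waiting for $${LINK} ..."
  sleep 1
done
DEVICE="$(readlink -f "$${LINK}")"   # resolves to the actual /dev/nvmeXn1 device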
Env:
Terraform 1.1.2
hashicorp/aws 4.10.0
Does anyone have experience with YARN node labels on AWS EMR? If so, please share your thoughts. We want to run all the Spark executors on TASK (Spot) machines and all the Spark ApplicationMasters/drivers on CORE (On-Demand) machines. Previously we were running both Spark executors and Spark drivers on the CORE (On-Demand) machines.
To achieve this, we create the "TASK" YARN node label as part of a custom AWS EMR bootstrap action, and map that "TASK" label to any Spot instance when it registers with AWS EMR in a separate bootstrap action. As "CORE" is the default YARN node label expression, we simply map it to On-Demand instances when the node registers in the bootstrap action.
We are using the "spark.yarn.executor.nodeLabelExpression": "TASK" Spark conf to launch Spark executors on TASK nodes.
The problem we are facing is a wrong mapping of YARN node labels to the appropriate machines: for a short duration (around 1-2 minutes) the "TASK" label is mapped to On-Demand instances and the "CORE" label is mapped to Spot instances. During this short window of wrong labeling, YARN launches Spark executors on On-Demand instances and Spark drivers on Spot instances.
This wrong mapping persists until the bootstrap actions complete; after that, the mapping automatically resolves to its correct state.
The script we run as part of the bootstrap action:
This script runs on every new machine to assign a label to it. It is launched as a background process because yarn only becomes available after all custom bootstrap actions have completed.
#!/usr/bin/env bash
set -ex

function waitTillYarnComesUp() {
  IS_YARN_EXIST=$(which yarn | grep -i yarn | wc -l)
  while [ "$IS_YARN_EXIST" != "1" ]; do
    echo "yarn is not available yet"
    sleep 15
    IS_YARN_EXIST=$(which yarn | grep -i yarn | wc -l)
  done
  echo "yarn is available.."
}

function waitTillTaskLabelSyncs() {
  LABEL_EXIST=$(yarn cluster --list-node-labels | grep -i TASK | wc -l)
  while [ "$LABEL_EXIST" -eq 0 ]; do
    sleep 15
    LABEL_EXIST=$(yarn cluster --list-node-labels | grep -i TASK | wc -l)
  done
}

function getHostInstanceTypeAndApplyLabel() {
  HOST_IP=$(curl http://169.254.169.254/latest/meta-data/local-hostname)
  echo "host ip is ${HOST_IP}"
  INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-life-cycle)
  echo "instance type is ${INSTANCE_TYPE}"
  PORT_NUMBER=8041
  spot="spot"
  onDemand="on-demand"
  if [ "$INSTANCE_TYPE" == "$spot" ]; then
    yarn rmadmin -replaceLabelsOnNode "${HOST_IP}:${PORT_NUMBER}=TASK"
  elif [ "$INSTANCE_TYPE" == "$onDemand" ]; then
    yarn rmadmin -replaceLabelsOnNode "${HOST_IP}:${PORT_NUMBER}=CORE"
  fi
}

waitTillYarnComesUp
# holding for resource manager sync
sleep 100
waitTillTaskLabelSyncs
getHostInstanceTypeAndApplyLabel
exit 0
yarn rmadmin -addToClusterNodeLabels "TASK(exclusive=false)"
This command is run on the master instance to create the new TASK YARN node label at cluster-creation time.
Does anyone have a clue how to prevent this wrong mapping of labels?
I would like to propose the following:
Create every node with some default label, like LABEL_PENDING. You can do this using the EMR classifications;
In the bootstrap script, identify whether the current node is an On-Demand or a Spot instance;
After that, on every node, change LABEL_PENDING in /etc/hadoop/conf/yarn-site.xml to ON_DEMAND or SPOT;
On the master node, add the 3 labels to YARN: LABEL_PENDING, ON_DEMAND, and SPOT (see the sketch after the bootstrap example below).
Example of EMR Classifications:
[
  {
    "classification": "yarn-site",
    "properties": {
      "yarn.node-labels.enabled": "true",
      "yarn.node-labels.am.default-node-label-expression": "ON_DEMAND",
      "yarn.nodemanager.node-labels.provider.configured-node-partition": "LABEL_PENDING"
    },
    "configurations": []
  },
  {
    "classification": "capacity-scheduler",
    "properties": {
      "yarn.scheduler.capacity.root.accessible-node-labels.ON_DEMAND.capacity": "100",
      "yarn.scheduler.capacity.root.accessible-node-labels.SPOT.capacity": "100",
      "yarn.scheduler.capacity.root.default.accessible-node-labels.ON_DEMAND.capacity": "100",
      "yarn.scheduler.capacity.root.default.accessible-node-labels.SPOT.capacity": "100"
    },
    "configurations": []
  },
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.yarn.am.nodeLabelExpression": "ON_DEMAND",
      "spark.yarn.executor.nodeLabelExpression": "SPOT"
    },
    "configurations": []
  }
]
Example of the additional part to your bootstrap script
yarnNodeLabelConfig="yarn.nodemanager.node-labels.provider.configured-node-partition"
yarnSiteXml="/etc/hadoop/conf/yarn-site.xml"

function waitForYarnConfIsReady() {
  while [[ ! -e $yarnSiteXml ]]; do
    sleep 2
  done
  IS_CONF_PRESENT_IN_FILE=$(grep $yarnNodeLabelConfig $yarnSiteXml | wc -l)
  while [[ $IS_CONF_PRESENT_IN_FILE != "1" ]]; do
    echo "Yarn conf file doesn't have properties"
    sleep 2
    IS_CONF_PRESENT_IN_FILE=$(grep $yarnNodeLabelConfig $yarnSiteXml | wc -l)
  done
}

function updateLabelInYarnConf() {
  INSTANCE_TYPE=$(curl http://169.254.169.254/latest/meta-data/instance-life-cycle)
  echo "Instance type is $INSTANCE_TYPE"
  if [[ $INSTANCE_TYPE == "spot" ]]; then
    sudo sed -i 's/>LABEL_PENDING</>SPOT</' $yarnSiteXml
  elif [[ $INSTANCE_TYPE == "on-demand" ]]; then
    sudo sed -i 's/>LABEL_PENDING</>ON_DEMAND</' $yarnSiteXml
  fi
}

waitForYarnConfIsReady
updateLabelInYarnConf
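For step 4 in the list above (adding the three labels on the master node), a sketch using the same rmadmin syntax shown in the question; run it once on the master, e.g. as an EMR step or a master-only action, after YARN is up:
# Creates the three non-exclusive partitions used by the classifications above.
yarn rmadmin -addToClusterNodeLabels "LABEL_PENDING(exclusive=false),ON_DEMAND(exclusive=false),SPOT(exclusive=false)"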
When I try to run a Python script to build an AMI from a snapshot, it fails with:
botocore.exceptions.ClientError: An error occurred (InvalidBlockDeviceMapping) when calling the RegisterImage operation: No root snapshot specified in device mapping.
When I check, everything looks right; I don't find any root snapshot details in EBS.
BlockDeviceMappings=[
    {
        'DeviceName': '/dev/sdb',
        'Ebs': {
            'SnapshotId': destination_snapshot_id
        },
    },
],
EnaSupport=True,
Name="jenkins-slave-" + str(int(time.time())),
VirtualizationType='hvm',
RootDeviceName='/dev/sda1'
)
RootDeviceName must match one of the DeviceName values in BlockDeviceMappings[].
In Kanthi K's case, /dev/sda1 does not match /dev/sdb.
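For example, a corrected call might look like the sketch below (assuming the snapshot really is the root volume; register_image and its parameters are standard boto3, and destination_snapshot_id is a placeholder standing in for the value from the question):
import time
import boto3

client = boto3.client('ec2')
destination_snapshot_id = 'snap-0123456789abcdef0'  # placeholder; use your real snapshot ID

# RootDeviceName now matches the DeviceName in the block device mapping.
response = client.register_image(
    Name='jenkins-slave-' + str(int(time.time())),
    VirtualizationType='hvm',
    EnaSupport=True,
    RootDeviceName='/dev/sda1',
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/sda1',  # was /dev/sdb; must equal RootDeviceName
            'Ebs': {'SnapshotId': destination_snapshot_id},
        },
    ],
)
print(response['ImageId'])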
I have an ENI created, and I need to attach it as a secondary ENI to my EC2 instance dynamically using CloudFormation. As I am using a Red Hat AMI, I have to manually configure RHEL, which includes the steps mentioned in the post below.
Manually Configuring secondary Elastic network interface on Red hat ami- 7.5
Can someone please tell me how to automate all of this using CloudFormation? Is there a way to do all of it using user data in a CloudFormation template? Also, I need to make sure the configuration survives a reboot of my EC2 instance (currently the configuration is lost after a reboot).
Though it's not complete automation, you can do the following to make sure the ENI comes up after every reboot of your EC2 instance (only for RHEL instances). If anyone has a better suggestion, kindly share.
vi /etc/systemd/system/create.service
Add the content below:
[Unit]
Description=XYZ
After=network.target
[Service]
ExecStart=/usr/local/bin/my.sh
[Install]
WantedBy=multi-user.target
Change permissions and enable the service
chmod a+x /etc/systemd/system/create.service
systemctl enable /etc/systemd/system/create.service
The shell script below does the ENI configuration on RHEL.
vi /usr/local/bin/my.sh
Add the content below:
#!/bin/bash
# The MAC address below is specific to the author's ENI; use your own ENI's MAC from the instance metadata.
my_eth1=`curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/0e:3f:96:77:bb:f8/local-ipv4s/`
echo "this is the value--" $my_eth1 "hoo"
GATEWAY=`ip route | awk '/default/ { print $3 }'`
printf "NETWORKING=yes\nNOZEROCONF=yes\nGATEWAYDEV=eth0\n" >/etc/sysconfig/network
printf "\nBOOTPROTO=dhcp\nDEVICE=eth1\nONBOOT=yes\nTYPE=Ethernet\nUSERCTL=no\n" >/etc/sysconfig/network-scripts/ifcfg-eth1
ifup eth1
ip route add default via $GATEWAY dev eth1 tab 2
ip rule add from $my_eth1/32 tab 2 priority 600
Start the service
systemctl start create.service
You can check whether the script ran fine with:
journalctl -u create.service -b
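To tie this back to the question about doing it from a CloudFormation template: a minimal sketch (not part of the answer above) of a UserData script that lays down the same two files at first boot and enables the service; the elided bodies are the unit file and my.sh shown above:
#!/bin/bash
# Sketch for EC2 UserData (e.g. wrapped in Fn::Base64 in the CloudFormation template).
cat > /etc/systemd/system/create.service <<'EOF'
# ... paste the [Unit]/[Service]/[Install] content from above ...
EOF

cat > /usr/local/bin/my.sh <<'EOF'
# ... paste the my.sh content from above ...
EOF

chmod a+x /usr/local/bin/my.sh
systemctl daemon-reload
systemctl enable create.service
systemctl start create.service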
I still need to figure out configuring the secondary ENI from within Linux, but this is the Python script I wrote to have the instance find the corresponding ENI and attach it to itself. The script works by taking a predefined Name tag on both the ENI and the instance, then joining the two together.
Pre-reqs for setting this up are:
IAM role on the instance to allow access to the S3 bucket where the script is stored, plus the EC2 calls the script makes (see the policy sketch below)
Install pip and the AWS CLI in the user data section
curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install awscli --upgrade
aws configure set default.region YOUR_REGION_HERE
pip install boto3
sleep 180
A note on the sleep 180 command: I have my ENI swap between instances in an autoscaling group. This gives the other instance an extra 3 minutes to shut down and drop the ENI so the new one can pick it up. It may or may not be necessary for your use case.
AWS CLI command in user data to download the file onto the instance (example below)
aws s3api get-object --bucket YOURBUCKETNAME --key NAMEOFOBJECT.py /home/ec2-user/NAMEOFOBJECT.py
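For the IAM role prerequisite above, a minimal policy sketch (the action names are the standard ones for the calls used in this answer; the bucket name is a placeholder, and you may want to scope the EC2 actions further):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EniManagement",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeInstances",
                "ec2:AttachNetworkInterface",
                "ec2:DetachNetworkInterface"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ScriptDownload",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOURBUCKETNAME/*"
        }
    ]
}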
# coding: utf-8
import boto3
import sys
import time

client = boto3.client('ec2')

# Get the ENI ID
eni = client.describe_network_interfaces(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the name of your ENI tag here']
        },
    ]
)
eni_id = eni['NetworkInterfaces'][0]['NetworkInterfaceId']

# Get ENI status
eni_status = eni['NetworkInterfaces'][0]['Status']
print('Current Status: {}\n'.format(eni_status))

# Detach if in use
if eni_status == 'in-use':
    eni_attach_id = eni['NetworkInterfaces'][0]['Attachment']['AttachmentId']
    eni_detach = client.detach_network_interface(
        AttachmentId=eni_attach_id,
        DryRun=False,
        Force=False
    )
    print(eni_detach)

# Wait until ENI is available
print('start\n-----')
while eni_status != 'available':
    print('checking...')
    eni_state = client.describe_network_interfaces(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': ['Put the name of your ENI tag here']
            },
        ]
    )
    eni_status = eni_state['NetworkInterfaces'][0]['Status']
    print('ENI is currently: ' + eni_status + '\n')
    if eni_status != 'available':
        time.sleep(10)
print('end')

# Get the instance ID
instance = client.describe_instances(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': ['Put the tag name of your instance here']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
)
instance_id = instance['Reservations'][0]['Instances'][0]['InstanceId']

# Attach the ENI
response = client.attach_network_interface(
    DeviceIndex=1,
    DryRun=False,
    InstanceId=instance_id,
    NetworkInterfaceId=eni_id
)
Given the Datomic Cloudformation template (described here and here), I can deploy a Datomic instance in AWS. I can also use Terraform to automate this.
Using Terraform, how do we put a load balancer in front of the instance in the Cloudformation template?
Using Terraform, how do we put a Route53 domain name in front of the Datomic instance (or load balancer) in the Cloudformation template?
The Datomic Cloudformation template looks like this:
cf.json
{"Resources":
{"LaunchGroup":
{"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":
{"MinSize":{"Ref":"GroupSize"},
"Tags":
[{"Key":"Name",
"Value":{"Ref":"AWS::StackName"},
"PropagateAtLaunch":"true"}],
"MaxSize":{"Ref":"GroupSize"},
"AvailabilityZones":{"Fn::GetAZs":""},
"LaunchConfigurationName":{"Ref":"LaunchConfig"}}},
"LaunchConfig":
{"Type":"AWS::AutoScaling::LaunchConfiguration",
"Properties":
{"ImageId":
{"Fn::FindInMap":
["AWSRegionArch2AMI", {"Ref":"AWS::Region"},
{"Fn::FindInMap":
["AWSInstanceType2Arch", {"Ref":"InstanceType"}, "Arch"]}]},
"UserData":
{"Fn::Base64":
{"Fn::Join":
["\n",
["exec > >(tee \/var\/log\/user-data.log|logger -t user-data -s 2>\/dev\/console) 2>&1",
{"Fn::Join":["=", ["export XMX", {"Ref":"Xmx"}]]},
{"Fn::Join":["=", ["export JAVA_OPTS", {"Ref":"JavaOpts"}]]},
{"Fn::Join":
["=",
["export DATOMIC_DEPLOY_BUCKET",
{"Ref":"DatomicDeployBucket"}]]},
{"Fn::Join":
["=", ["export DATOMIC_VERSION", {"Ref":"DatomicVersion"}]]},
"cd \/datomic", "cat <<EOF >aws.properties",
"host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/local-ipv4`",
"alt-host=`curl http:\/\/169.254.169.254\/latest\/meta-data\/public-ipv4`",
"aws-dynamodb-region=us-east-1\naws-transactor-role=datomic-aws-transactor-10\naws-peer-role=datomic-aws-peer-10\nprotocol=ddb\nmemory-index-max=256m\nport=4334\nmemory-index-threshold=32m\nobject-cache-max=128m\nlicense-key=\naws-dynamodb-table=your-system-name",
"EOF", "chmod 744 aws.properties",
"AWS_ACCESS_KEY_ID=\"${DATOMIC_READ_DEPLOY_ACCESS_KEY_ID}\" AWS_SECRET_ACCESS_KEY=\"${DATOMIC_READ_DEPLOY_AWS_SECRET_KEY}\" aws s3 cp \"s3:\/\/${DATOMIC_DEPLOY_BUCKET}\/${DATOMIC_VERSION}\/startup.sh\" startup.sh",
"chmod 500 startup.sh", ".\/startup.sh"]]}},
"InstanceType":{"Ref":"InstanceType"},
"InstanceMonitoring":{"Ref":"InstanceMonitoring"},
"SecurityGroups":{"Ref":"SecurityGroups"},
"IamInstanceProfile":{"Ref":"InstanceProfile"},
"BlockDeviceMappings":
[{"DeviceName":"\/dev\/sdb", "VirtualName":"ephemeral0"}]}}},
"Mappings":
{"AWSInstanceType2Arch":
{"m3.large":{"Arch":"64h"},
"c4.8xlarge":{"Arch":"64h"},
"t2.2xlarge":{"Arch":"64h"},
"c3.large":{"Arch":"64h"},
"hs1.8xlarge":{"Arch":"64h"},
"i2.xlarge":{"Arch":"64h"},
"r4.4xlarge":{"Arch":"64h"},
"m1.small":{"Arch":"64p"},
"m4.large":{"Arch":"64h"},
"m4.xlarge":{"Arch":"64h"},
"c3.8xlarge":{"Arch":"64h"},
"m1.xlarge":{"Arch":"64p"},
"cr1.8xlarge":{"Arch":"64h"},
"m4.10xlarge":{"Arch":"64h"},
"i3.8xlarge":{"Arch":"64h"},
"m3.2xlarge":{"Arch":"64h"},
"r4.large":{"Arch":"64h"},
"c4.xlarge":{"Arch":"64h"},
"t2.medium":{"Arch":"64h"},
"t2.xlarge":{"Arch":"64h"},
"c4.large":{"Arch":"64h"},
"c3.2xlarge":{"Arch":"64h"},
"m4.2xlarge":{"Arch":"64h"},
"i3.2xlarge":{"Arch":"64h"},
"m2.2xlarge":{"Arch":"64p"},
"c4.2xlarge":{"Arch":"64h"},
"cc2.8xlarge":{"Arch":"64h"},
"hi1.4xlarge":{"Arch":"64p"},
"m4.4xlarge":{"Arch":"64h"},
"i3.16xlarge":{"Arch":"64h"},
"r3.4xlarge":{"Arch":"64h"},
"m1.large":{"Arch":"64p"},
"m2.4xlarge":{"Arch":"64p"},
"c3.4xlarge":{"Arch":"64h"},
"r3.large":{"Arch":"64h"},
"c4.4xlarge":{"Arch":"64h"},
"r3.xlarge":{"Arch":"64h"},
"m2.xlarge":{"Arch":"64p"},
"r4.16xlarge":{"Arch":"64h"},
"t2.large":{"Arch":"64h"},
"m3.xlarge":{"Arch":"64h"},
"i2.4xlarge":{"Arch":"64h"},
"r4.8xlarge":{"Arch":"64h"},
"i3.large":{"Arch":"64h"},
"r3.8xlarge":{"Arch":"64h"},
"c1.medium":{"Arch":"64p"},
"r4.2xlarge":{"Arch":"64h"},
"i2.8xlarge":{"Arch":"64h"},
"m3.medium":{"Arch":"64h"},
"r3.2xlarge":{"Arch":"64h"},
"m1.medium":{"Arch":"64p"},
"i3.4xlarge":{"Arch":"64h"},
"m4.16xlarge":{"Arch":"64h"},
"i3.xlarge":{"Arch":"64h"},
"r4.xlarge":{"Arch":"64h"},
"c1.xlarge":{"Arch":"64p"},
"t1.micro":{"Arch":"64p"},
"c3.xlarge":{"Arch":"64h"},
"i2.2xlarge":{"Arch":"64h"},
"t2.small":{"Arch":"64h"}},
"AWSRegionArch2AMI":
{"ap-northeast-1":{"64p":"ami-eb494d8c", "64h":"ami-81f7cde6"},
"ap-northeast-2":{"64p":"ami-6eb66a00", "64h":"ami-f594489b"},
"ca-central-1":{"64p":"ami-204bf744", "64h":"ami-5e5be73a"},
"us-east-2":{"64p":"ami-5b42643e", "64h":"ami-896c4aec"},
"eu-west-2":{"64p":"ami-e52d3a81", "64h":"ami-55091e31"},
"us-west-1":{"64p":"ami-97cbebf7", "64h":"ami-442a0a24"},
"ap-southeast-1":{"64p":"ami-db1492b8", "64h":"ami-3e90165d"},
"us-west-2":{"64p":"ami-daa5c6ba", "64h":"ami-cb5030ab"},
"eu-central-1":{"64p":"ami-f3f02b9c", "64h":"ami-d564bcba"},
"us-east-1":{"64p":"ami-7f5f1e69", "64h":"ami-da5110cc"},
"eu-west-1":{"64p":"ami-66001700", "64h":"ami-77465211"},
"ap-southeast-2":{"64p":"ami-32cbdf51", "64h":"ami-66647005"},
"ap-south-1":{"64p":"ami-82126eed", "64h":"ami-723c401d"},
"sa-east-1":{"64p":"ami-afd7b9c3", "64h":"ami-ab9af4c7"}}},
"Parameters":
{"InstanceType":
{"Description":"Type of EC2 instance to launch",
"Type":"String",
"Default":"c3.large"},
"InstanceProfile":
{"Description":"Preexisting IAM role \/ instance profile",
"Type":"String",
"Default":"datomic-aws-transactor-10"},
"Xmx":
{"Description":"Xmx setting for the JVM",
"Type":"String",
"AllowedPattern":"\\d+[GgMm]",
"Default":"2625m"},
"GroupSize":
{"Description":"Size of machine group",
"Type":"String",
"Default":"1"},
"InstanceMonitoring":
{"Description":"Detailed monitoring for store instances?",
"Type":"String",
"Default":"true"},
"JavaOpts":
{"Description":"Options passed to Java launcher",
"Type":"String",
"Default":""},
"SecurityGroups":
{"Description":"Preexisting security groups.",
"Type":"CommaDelimitedList",
"Default":"datomic"},
"DatomicDeployBucket":
{"Type":"String",
"Default":"deploy-a0dbc565-faf2-4760-9b7e-29a8e45f428e"},
"DatomicVersion":{"Type":"String", "Default":"0.9.5561.50"}},
"Description":"Datomic Transactor Template"}
samples/cf-template.properties
#################################################################
# AWS instance and group settings
#################################################################
# required
# AWS instance type. See http://aws.amazon.com/ec2/instance-types/ for
# a list of legal instance types.
aws-instance-type=c3.large
# required, see http://docs.amazonwebservices.com/general/latest/gr/rande.html#ddb_region
aws-region=us-east-1
# required
# Enable detailed monitoring of AWS instances.
aws-instance-monitoring=true
# required
# Set group size >1 to create a standby pool for High Availability.
aws-autoscaling-group-size=1
# required, default = 70% of AWS instance RAM
# Passed to java launcher via -Xmx
java-xmx=
#################################################################
# Java VM options
#
# If you set the java-opts property, it will entirely replace the
# value used by bin/transactor, which you should consult as a
# starting point if you are configuring GC.
#
# Note that the single-quoting is necessary due to the whitespace
# between options.
#################################################################
# java-opts='-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly'
#################################################################
# security settings
#
# You must specify at least one of aws-ingress-grops or
# aws-ingress-cidrs to allows peers to connect!
#################################################################
# required
# The transactor needs to run in a security group that opens the
# transactor port to legal peers. If you specify a security group,
# `bin/transactor ensure-cf ...` will ensure that security group
# allows ingress on the transactor port.
aws-security-group=datomic
# Comma-delimited list of security groups. Security group syntax:
# group-name or aws-account-id:group-name
aws-ingress-groups=datomic
# Comma-delimited list of CIDRS.
# aws-ingress-cidrs=0.0.0.0/0
#################################################################
# datomic deployment settings
#################################################################
# required, default = VERSION number of Datomic you deploy from
# Which Datomic version to run.
datomic-version=
# required
# download Datomic from this bucket on startup. You typically will not change this.
datomic-deploy-s3-bucket=some-value
Unless you can't easily avoid it, I wouldn't recommend mixing Cloudformation with Terraform because it makes a lot of things painful. Normally I'd only recommend it for the rare occasions where Cloudformation covers a resource that Terraform doesn't.
If you do need to do this, you're in luck, because your Cloudformation template adds a tag to the autoscaling group containing your instance(s). You can use that tag to find the autoscaling group, link a load balancer to it, and have the instances attach themselves to the load balancer as they are created (and detach when they are deleted).
Unfortunately the Cloudformation template doesn't simply output the autoscaling group name, so you'll probably need to do this in two separate terraform apply actions (probably keeping the configuration in separate folders).
Assuming something like this for your Cloudformation stack:
resource "aws_cloudformation_stack" "datomic" {
name = "datomic-stack"
...
}
Then a minimal example looks something like this:
data "aws_autoscaling_groups" "datomic" {
filter {
name = "key"
values = ["AWS::StackName"]
}
filter {
name = "value"
values = ["datomic-stack"]
}
}
resource "aws_lb_target_group" "datomic" {
name = "datomic-lb-tg"
port = 80
protocol = "HTTP"
vpc_id = "${var.vpc_id}"
}
resource "aws_lb" "datomic" {
name = "datomic-lb"
internal = false
security_groups = ["${var.security_group_id}"]
subnets = ["${var.subnet_id"]
}
resource "aws_autoscaling_attachment" "asg_attachment" {
autoscaling_group_name = "${data.aws_autoscaling_groups.datomic.names[0]}"
alb_target_group_arn = "${aws_alb_target_group.datomic.arn}"
}
resource "aws_lb_listener" "datomic" {
load_balancer_arn = "${aws_lb.datomic.arn}"
port = "80"
protocol = "HTTP"
default_action {
target_group_arn = "${aws_lb_target_group.datomic.arn}"
type = "forward"
}
}
The above config will find the autoscaling group created by the Cloudformation template and then attach it to an application load balancer that listens for HTTP traffic and forwards HTTP traffic to the Datomic instances.
From here it's trivial to add a Route53 record pointing at the load balancer, but because your instances are in an autoscaling group you can't easily add Route53 records for the instances themselves (and probably shouldn't need to).
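For example, an alias record pointing at the load balancer could look like the sketch below (aws_route53_record is the standard Terraform resource; var.zone_id and the record name are placeholders for your own hosted zone and domain):
resource "aws_route53_record" "datomic" {
  zone_id = "${var.zone_id}"        # your hosted zone ID (placeholder)
  name    = "datomic.example.com"   # the DNS name you want (placeholder)
  type    = "A"

  alias {
    name                   = "${aws_lb.datomic.dns_name}"
    zone_id                = "${aws_lb.datomic.zone_id}"
    evaluate_target_health = false
  }
}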