terraform data source output to file - amazon-web-services

I would like to know whether a Terraform data source can write its output to a text file.
I searched online but could not find anything. The plan is to get the load balancer name, and afterwards our automation script will run an aws-cli command that uses the load balancer name obtained from the data source.

If your CLB name is autogenerated by TF, you can save it in a file using local_file:
resource "aws_elb" "clb" {
availability_zones = ["ap-southeast-2a"]
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
resource "local_file" "foo" {
content = <<-EOL
${aws_elb.clb.name}
EOL
filename = "${path.module}/clb_name.txt"
}
output "clb_name" {
value = aws_elb.clb.name
}
But it might be easier to read the output value directly as JSON:
clb_name=$(terraform output -json clb_name | jq -r .)
echo "${clb_name}"
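Either way, your automation script can then pick the name up and feed it to the AWS CLI. A minimal sketch, assuming the name was written to clb_name.txt as above (the describe-load-balancers call is just a placeholder for whatever your script actually runs):

#!/bin/bash
# Read the CLB name produced by the local_file resource and strip any whitespace.
clb_name=$(tr -d '[:space:]' < clb_name.txt)

# Placeholder follow-up command; replace with your real automation step.
aws elb describe-load-balancers --load-balancer-names "${clb_name}"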

Related

Terraform 14 template_file and null_resource issue

I'm trying to use a null_resource with a local-exec provisioner to enable S3 bucket logging on a load balancer, using a template file. Both the Terraform file and the template file (lb-to-s3-log.tpl) are in the same directory, "/modules/lb-to-s3-log", yet I'm getting an error. The Terraform file looks like this:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes ${data.template_file.lb-to-s3-log.rendered}"
}
}
WHERE:
var.INFO1 = test1
var.INFO2 = test2
var.INFO3 = test3

AND THE TEMPLATE (TPL) FILE CONTAINS:
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs",
    "EmitInterval": 5,
    "S3BucketPrefix": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs"
  }
}
ERROR I'M GETTING:
Error: Error running command 'aws elb modify-load-balancer-attributes --load-balancer-name awseb-e-5-AWSEBLoa-ABCDE0FGHI0V --load-balancer-attributes {
"AccessLog": {
"Enabled": true,
"S3BucketName": "test1-test2-test3-logs",
"EmitInterval": 5,
"S3BucketPrefix": "test1-test2-test3-logs"
}
}
': exit status 2. Output:
Error parsing parameter '--load-balancer-attributes': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {
/bin/sh: line 1: AccessLog:: command not found
/bin/sh: line 2: Enabled:: command not found
/bin/sh: line 3: S3BucketName:: command not found
/bin/sh: line 4: EmitInterval:: command not found
/bin/sh: line 5: S3BucketPrefix:: command not found
/bin/sh: -c: line 6: syntax error near unexpected token `}'
/bin/sh: -c: line 6: ` }'
ISSUE / PROBLEM:
The template file successfully substitutes the variable assignments (X_INFO1, X_INFO2, X_INFO3). The issue seems to be with the ${data.template_file.lb-to-s3-log.rendered} part of the aws cli command.
I get the same error when I rename the file from lb-s3log.tpl to lb-s3log.json.
I'm using Terraform v0.14. I followed the process for enabling an S3 bucket for log storage of an Amazon Classic Load Balancer from this documentation
The error is happening because the JSON needs to be escaped for the command line, or written to a file and then referenced with file://.
Wrapping your JSON in single quotes should be enough to escape the shell issues:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${data.template_file.lb-to-s3-log.rendered}'"
}
}
You can use the local_file resource to render a file if you'd prefer that option:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "local_file" "elb_attributes" {
content = data.template_file.lb-to-s3-log.rendered
filename = "${path.module}/elb-attributes.json"
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes file://${local_file.elb_attributes.filename}"
}
}
A better alternative here though, unless there's something fundamental preventing it, would be to have Terraform manage the ELB access logs by using the access_logs parameter on the resource:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
}
You might also want to consider moving to Application Load Balancers or possibly Network Load Balancers depending on your usage as ELBs are a deprecated service.
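For reference, the equivalent on an Application Load Balancer would look roughly like this (a sketch only; the name, subnets, and logging bucket are placeholder values, and the bucket still needs a policy that allows the load balancer service to write to it):

resource "aws_lb" "example" {
  name               = "example-alb"
  load_balancer_type = "application"
  subnets            = ["subnet-aaaa1111", "subnet-bbbb2222"]

  access_logs {
    bucket  = "my-alb-logs-bucket"
    prefix  = "example-alb"
    enabled = true
  }
}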
Finally, it's also worth noting that the template_file data source has been deprecated since Terraform 0.12, and the templatefile function is preferred instead.
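A minimal sketch of the templatefile approach, assuming the same variables and template file as above:

locals {
  lb_to_s3_log = templatefile("${path.module}/lb-to-s3-log.tpl", {
    X_INFO1 = var.INFO1
    X_INFO2 = var.INFO2
    X_INFO3 = var.INFO3
  })
}

resource "null_resource" "lb-to-s3-log" {
  provisioner "local-exec" {
    command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${local.lb_to_s3_log}'"
  }
}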

How to execute PowerShell command through Terraform

I am trying to create a Windows EC2 instance from an AMI and execute a PowerShell command on it, as follows:
data "aws_ami" "ec2-worker-initial-encrypted-ami" {
filter {
name = "tag:Name"
values = ["ec2-worker-initial-encrypted-ami"]
}
}
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
provisioner "local-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
}
}
and I am facing the following error:
aws_instance.my-test-instance: Error running command 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
-Schedule': exit status 1. Output: The term 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1'
is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and try again. At line:1
char:72
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
<<<< -Schedule
CategoryInfo : ObjectNotFound: (C:\ProgramData...izeInstance.ps1:String) [],
CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
You are using a local-exec provisioner, which runs the requested PowerShell code on the workstation running Terraform:
The local-exec provisioner invokes a local executable after a resource
is created. This invokes a process on the machine running Terraform,
not on the resource.
It sounds like you want to execute the PowerShell script on the resulting instance, in which case you'll need to use a remote-exec provisioner, which will run your PowerShell on the target resource:
The remote-exec provisioner invokes a script on a remote resource
after it is created. This can be used to run a configuration
management tool, bootstrap into a cluster, etc.
You will also need to include connection details, for example:
provisioner "remote-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
}
Which means this instance must also be ready to accept WinRM connections.
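One way to get an instance into that state is a short PowerShell bootstrap (for example baked into the AMI or run at first boot). This is a permissive sketch for testing only, since it allows Basic auth over unencrypted HTTP:

# Sketch: open up WinRM over HTTP with Basic auth -- not production-safe.
Set-WSManQuickConfig -Force -SkipNetworkProfileCheck
Set-Item WSMan:\localhost\Service\Auth\Basic -Value $true
Set-Item WSMan:\localhost\Service\AllowUnencrypted -Value $true
New-NetFirewallRule -DisplayName "WinRM HTTP" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow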
There are other options for completing this task though. Such as using userdata, which Terraform also supports. This might look like the following example:
Example of using a userdata file in Terraform
File named userdata.txt:
<powershell>
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
</powershell>
Launch instance using the userdata file:
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
user_data = "${file(userdata.txt)}"
}
The file interpolation will read the contents of the userdata file as a string and pass it to user_data for the instance launch. Once the instance launches, it should run the script as you expect.
What Brian is claiming is correct: you will get an "invalid or unknown key: interpreter" error.
To run PowerShell correctly you will need to do it as follows, based on Brandon's answer:
provisioner "remote-exec" {
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
inline = [
"powershell -ExecutionPolicy Unrestricted -File C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule"
]
}
Edit
To copy the files over to the machine, use the below:
provisioner "file" {
source = "${path.module}/some_path"
destination = "C:/some_path"
connection {
host = "${azurerm_network_interface.vm_nic.private_ip_address}"
timeout = "3m"
type = "winrm"
https = true
port = 5986
use_ntlm = true
insecure = true
#cacert = "${azurerm_key_vault_certificate.vm_cert.certificate_data}"
user = var.admin_username
password = var.admin_password
}
}
Update:
Provisioners are currently not recommended by HashiCorp; the full instructions and explanation (it is long) can be found at: terraform.io/docs/provisioners/index.html
FTR: Brandon's answer is correct, except the example code provided for the remote-exec includes keys that are unsupported by the provisioner.
Neither command nor interpreter is a supported key.
https://www.terraform.io/docs/provisioners/remote-exec.html

aws_elb terraform error failed to load root config module

Here's the block of code for aws_elb from main.tf.
resource "aws_elb" "terraformelb" {
name = "terraformelb"
subnets = ["${aws_subnet.public_subnet.id}"]
security_groups = ["${aws_security_group.web_sg.id}"]
instances = ["${aws_instance.web_*.id}"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
I have followed the Terraform syntax and I still get the error:
Failed to load root config module: Error loading C:\Users\snadella001\Downloads\Terraform\repo\main.tf: Error reading config for aws_elb[terraform-elb]: parse error at 1:21: expected expression but found "."
The error message refers to the resource terraform-elb (with a hyphen in the name), but your resource name is terraformelb.
You need to make sure the names are the same.
Looks like your instances section is wrong; it should look something like this, I'm guessing (without being able to see the rest of your code):
instances = ["${aws_instance.web.*.id}"]

Terraform: Mount volume

According to the documentation, using Terraform I'm able to create a volume on DigitalOcean:
resource "digitalocean_volume" "foobar" {
region = "nyc1"
name = "baz"
size = 100
description = "an example volume"
}
So, I'm also able to create a droplet and attach the volume to it:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
}
I'd like to know how to mount this volume at a desired location.
I need it mounted automatically: when the droplet comes up, the volume should already be mounted. I was thinking about using Chef...
Any ideas?
To mount the volume automatically, you can use user_data via cloud-init to run a script, as follows.
This is what your digitalocean_droplet resource should look like:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
# user data
user_data = "${data.template_cloudinit_config.cloudinit-example.rendered}"
}
Then your cloud-init configuration file, which contains the cloudinit_config, should be as below. It references the shell script in ${TERRAFORM_HOME}/scripts/disk.sh that mounts your volume automatically:
provider "cloudinit" {}
data "template_file" "shell-script" {
template = "${file("scripts/disk.sh")}"
}
data "template_cloudinit_config" "cloudinit-example" {
gzip = false
base64_encode = false
part {
content_type = "text/x-shellscript"
content = "${data.template_file.shell-script.rendered}"
}
}
The shell script that mounts the volume automatically on startup lives in ${TERRAFORM_HOME}/scripts/disk.sh.
It first checks whether a file system already exists on the device; if one does, it leaves the disk alone, otherwise it formats the disk before mounting:
#!/bin/bash
# DEVICE is expected to hold the volume's device path,
# e.g. /dev/disk/by-id/scsi-0DO_Volume_baz.

# Detect an existing filesystem on the device; format it as ext4 only if none is found.
DEVICE_FS=`blkid -o value -s TYPE ${DEVICE}`
if [ "`echo -n $DEVICE_FS`" == "" ] ; then
  mkfs.ext4 ${DEVICE}
fi

# Create the mount point, persist the mount in /etc/fstab, and mount it.
mkdir -p /data
echo '${DEVICE} /data ext4 defaults 0 0' >> /etc/fstab
mount /data
I hope this helps
Mounting the volume needs to be done from the guest OS itself using mount, fstab, etc.
The digital ocean docs cover this here.
Using Chef you could use resource_mount to mount it in an automated fashion.
The device name will be /dev/disk/by-id/scsi-0DO_Volume_YOUR_VOLUME_NAME. So, using the example from the Terraform docs, it would be /dev/disk/by-id/scsi-0DO_Volume_baz.

Fail to use terraform provisioner with aws lightsail

I am having trouble using provisioners (both "file" and "remote-exec") with AWS Lightsail. With the "file" provisioner I kept getting a dial error to port 22 with connection refused, and "remote-exec" gives me a timeout error. I can see it keeps trying to connect to the instance, but it just cannot connect.
As a comparison for the file provisioner, I have also tried scp directly and it works just fine.
A sample snippet of the connection block I am using is as follows:
resource "aws_lightsail_instance" "han-mongo" {
name = "han-mongo"
availability_zone = "us-east-1b"
blueprint_id = "ubuntu_16_04"
bundle_id = "nano_1_0"
key_pair_name = "my_key_pair"
user_data = "${file("userdata.sh")}"
provisioner "file" {
source = "file.service"
destination = "/home/ubuntu"
connection {
type = "ssh"
private_key = "${file("my_key.pem")}"
user = "ubuntu"
timeout = "20s"
}
}
}
In addition to the authentication information, it's also necessary to tell Terraform which IP address it should use to connect, like this:
connection {
  type        = "ssh"
  host        = "${self.public_ip_address}"
  private_key = "${file("my_key.pem")}"
  user        = "ubuntu"
  timeout     = "20s"
}
For some resources Terraform is able to automatically infer some of the connection details from the resource attributes, but at present that is not supported for Lightsail instances and so it's necessary to specify the host argument explicitly.
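Putting that together with the snippet from the question, the file provisioner would look something like this sketch:

provisioner "file" {
  source      = "file.service"
  destination = "/home/ubuntu"

  connection {
    type        = "ssh"
    host        = "${self.public_ip_address}"
    private_key = "${file("my_key.pem")}"
    user        = "ubuntu"
    timeout     = "20s"
  }
}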