I want the data "template_file" in the Terraform code below to execute after the provisioner "file" (basically an Ansible playbook) is copied to the EC2 instance. I am not able to successfully use "depends_on" in this scenario. Can someone please help me figure out how to achieve this? Below is a sample code snippet.
resource "aws_eip" "opendj-source-ami-eip" {
instance = "${aws_instance.opendj-source-ami-server.id}"
vpc = true
connection {
host = "${aws_eip.opendj-source-ami-eip.public_ip}"
user = "ubuntu"
timeout = "3m"
agent = false
private_key = "${file(var.private_key)}"
}
provisioner "file" {
source = "./${var.copy_password_file}"
destination = "/home/ubuntu/${var.copy_password_file}"
}
provisioner "file" {
source = "./${var.ansible_playbook}"
destination = "/home/ubuntu/${var.ansible_playbook}"
}
}
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
#depends_on = ["<< not sure what to put here>>"]
}
The correct format for depends_on references the resource as a whole, so in your case it would look like this:
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
depends_on = ["aws_eip.opendj-source-ami-eip"]
}
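Rendering the template does not by itself run anything on the instance, though. If the goal is to actually execute the playbooks once the files have been copied, one possible approach (a sketch that reuses the connection details from the aws_eip resource above, not a verified solution) is a separate null_resource with a remote-exec provisioner:

resource "null_resource" "run-ansible-playbooks" {
  # Runs only after the EIP resource (and its "file" provisioners) has finished.
  depends_on = ["aws_eip.opendj-source-ami-eip"]

  provisioner "remote-exec" {
    connection {
      host        = "${aws_eip.opendj-source-ami-eip.public_ip}"
      user        = "ubuntu"
      timeout     = "3m"
      agent       = false
      private_key = "${file(var.private_key)}"
    }

    inline = [
      "ansible-playbook /home/ubuntu/${var.copy_password_file}",
      "ansible-playbook /home/ubuntu/${var.ansible_playbook}",
    ]
  }
}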
Related
I have been stuck on this problem for some time now and I can't solve it.
I'm launching an EC2 instance that runs a bash script and installs a few things.
At the same time, I am also launching an RDS instance, but I need to be able to pass the value from the RDS endpoint to the EC2 instance to configure the connection.
I'm trying to do this using a template_file data source, like this:
resource "aws_rds_cluster_instance" "cluster_instances" {
count = 1
identifier = "rds-prod-ddbb-${count.index}"
cluster_identifier = aws_rds_cluster.default.id
instance_class = "db.r5.large"
engine = "aurora"
engine_version = "5.6.mysql_aurora.1.22.5"
publicly_accessible = "true"
}
resource "aws_rds_cluster" "default" {
cluster_identifier = "aws-rds-ddbb-cluster"
availability_zones = ["us-west-2b"]
db_subnet_group_name = "default-vpc-003d3ab296c"
skip_final_snapshot = "true"
backup_retention_period = 30
vpc_security_group_ids = [aws_security_group.ddbb.id]
}
data "template_file" "RDSs" {
template = file("init.sh")
vars = {
rds = aws_rds_cluster.default.endpoint
}
depends_on = [
aws_rds_cluster.default,
aws_rds_cluster_instance.cluster_instances,
]
}
resource "aws_instance" "web_01" {
ami = "ami-0477c9562acb09"
instance_type = "t2.micro"
subnet_id = "subnet-0d0558d99ec3cd3"
key_name = "web-01"
user_data_base64 = base64encode(data.template_file.RDSs.rendered)
vpc_security_group_ids = [aws_security_group.ddbb.id]
ebs_block_device {
device_name = "/dev/sda1"
volume_type = "gp2"
volume_size = 20
}
tags = {
Name = "Web01"
}
depends_on = [
aws_rds_cluster.default,
aws_rds_cluster_instance.cluster_instances,
]
}
And then, my init.sh is like this:
#!/bin/bash
echo "rds = $rds" > /var/tmp/rds
But I get nothing in /var/tmp/rds, so it looks like the variable $rds is empty.
Your help will be greatly appreciated.
PS: I have outputs configured like this:
outputs.tf
output "rds_endpoint" {
value = aws_rds_cluster.default.endpoint
}
And that is working fine: when the apply completes, it shows me the value of the RDS endpoint.
The variable is not a shell variable but a template variable: Terraform parses the file, regardless of its type, and replaces Terraform template variables in it.
Knowing this, $rds is not a Terraform variable interpolation, while ${rds} is.
So, your bash script should rather be:
#!/bin/bash
echo "rds = ${rds}" > /var/tmp/rds
Since Terraform 0.12 there is a more elegant way of passing variables to user_data.
Instead of the data "template_file" data source, prefer the new templatefile function in Terraform:
locals {
  WEB_SERVER_UNAME = "your_username"
  WEB_SERVER_PASS  = "your_password"
}

resource "aws_instance" "web_01" {
  ....

  user_data_base64 = base64encode(templatefile("${path.module}/user_data_script.sh", {
    WEB_SERVER_UNAME = local.WEB_SERVER_UNAME
    WEB_SERVER_PASS  = local.WEB_SERVER_PASS
  }))

  ....
}
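For completeness, a minimal sketch of what the referenced user_data_script.sh could contain, assuming it only needs to echo the two templated values:

#!/bin/bash
# Both values are substituted by templatefile() before the script reaches the instance.
echo "web server user: ${WEB_SERVER_UNAME}"
echo "web server pass: ${WEB_SERVER_PASS}"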
By using $rds you are referring to variables inside your shell script or environment variables; that is why it displays nothing.
To use the template variables you need to interpolate them in the following way: ${variable}
Refer to this for further details: https://www.terraform.io/language/expressions/strings#string-templates
I have a script file named auto.sh on my laptop, and I want to run this script on a GCP machine as soon as it is provisioned.
I created this Terraform file:
resource "google_compute_attached_disk" "default3" {
disk = google_compute_disk.default2.id
instance = google_compute_instance.default.id
}
resource "google_compute_instance" "default" {
name = "test"
machine_type = "custom-8-16384"
zone = "us-central1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
}
}
metadata_startup_script = "touch abcd.txt"
lifecycle {
ignore_changes = [attached_disk]
}
}
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "us-central1-a"
image = "centos-7-v20210609"
size = 100
}
This is working fine; now I want to run that script.
You should replace the
metadata_startup_script = "touch abcd.txt"
with either your script inline, if it's short enough, or with something like
metadata_startup_script = "${file("/path/to/your/file")}"
to load it from a file.
See metadata_startup_script docs
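For example, assuming auto.sh sits next to the module's Terraform files (the exact path is an assumption), the instance could load it like this:

resource "google_compute_instance" "default" {
  # ...other arguments as above...

  # Runs auto.sh on first boot; auto.sh is assumed to live in the module directory.
  metadata_startup_script = file("${path.module}/auto.sh")
}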
Hey folks,
I want to run a script on a GCP machine; for that I created the resource file below:
resource "google_compute_attached_disk" "default3" {
  disk     = google_compute_disk.default2.id
  instance = google_compute_instance.default.id
} # attach disk to vm
resource "google_compute_firewall" "firewall" {
name = "gritfy-firewall-externalssh"
network = "default"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["externalssh"]
} # allow ssh
resource "google_compute_address" "static" {
name = "vm-public-address"
project = "fit-visitor-305606"
region = "asia-south1"
depends_on = [ google_compute_firewall.firewall ]
} # reserve ip
resource "google_compute_instance" "default" {
name = "new"
machine_type = "custom-8-16384"
zone = "asia-south1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
metadata = {
ssh-keys = "${var.user}:${file(var.publickeypath)}"
}
lifecycle {
ignore_changes = [attached_disk]
}
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
}
provisioner "remote-exec" {
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
timeout = "500s"
private_key = file(var.privatekeypath)
}
inline = [
"sudo yum -y install epel-release",
"sudo yum -y install nginx",
"sudo nginx -v",
]
}
} # Create VM
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "asia-south1-a"
image = "centos-7-v20210609"
size = 100
} # Create Disk
Using this I am able to create the VM and the disk, and to attach the disk to the VM, but I am not able to run my script.
The error logs are:
The private key part is working fine: the key is assigned to the VM, and when I try to connect with that key it connects, so the problem may be with the provisioner part only.
Any help or guidance would be really helpful...
As the error message says, you need a connection configuration for the provisioner. You also need a remote-exec provisioner to run scripts.
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
connection {
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/autoo.sh",
"cd /tmp",
"./autoo.sh"
]
connection {
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
source: https://stackoverflow.com/a/36668395/5454632
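As an alternative sketch using the same variables as above, the connection block can also be declared once at the resource level, so every provisioner in the resource inherits it instead of repeating it:

resource "google_compute_instance" "default" {
  # ...other arguments as above...

  # Shared by all provisioners in this resource.
  connection {
    type        = "ssh"
    host        = google_compute_address.static.address
    user        = var.user
    private_key = file(var.privatekeypath)
  }

  provisioner "file" {
    source      = "autoo.sh"
    destination = "/tmp/autoo.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/autoo.sh",
      "/tmp/autoo.sh",
    ]
  }
}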
I'm trying to work with the aws_instances data source. I created a simple configuration which should create an EC2 instance and return its IP as an output:
variable "default_port" {
type = string
default = 8080
}
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/kharandziuk/.aws/creds"
profile = "prototyper"
}
resource "aws_instance" "example" {
ami = "ami-0994c095691a46fb5"
instance_type = "t2.small"
tags = {
name = "example"
}
}
data "aws_instances" "test" {
instance_tags = {
name = "example"
}
instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]
}
output "ip" {
value = data.aws_instances.test.public_ips
}
But for some reason I can't configure the data source properly. The result is:
> terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.aws_instances.test: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
on main.tf line 21, in data "aws_instances" "test":
21: data "aws_instances" "test" {
How can I fix it?
You should use the depends_on option in data.aws_instances.test, like this:
data "aws_instances" "test" {
instance_tags = {
name = "example"
}
instance_state_names = ["pending", "running", "shutting-down", "terminated", "stopping", "stopped"]
depends_on = [
"aws_instance.example"
]
}
This means that data.aws_instances.test is read only after resource aws_instance.example has been created. Sometimes we need this option because of dependencies between AWS resources.
See the documentation about the depends_on option.
You don't need a data source here. You can get the public IP address of the instance back from the resource itself, simplifying everything.
This should do the exact same thing:
resource "aws_instance" "example" {
ami = "ami-0994c095691a46fb5"
instance_type = "t2.small"
tags = {
name = "example"
}
}
output "ip" {
value = aws_instance.example.public_ip
}
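One caveat, assuming nothing about the surrounding VPC setup: public_ip is only populated if the instance actually receives a public address, for example when the subnet assigns one by default or when associate_public_ip_address is set:

resource "aws_instance" "example" {
  ami           = "ami-0994c095691a46fb5"
  instance_type = "t2.small"

  # Only needed if the subnet does not assign public IPs by default.
  associate_public_ip_address = true

  tags = {
    name = "example"
  }
}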
I am trying to create an AWS Lambda function using Terraform.
My terraform directory looks like
terraform/
  iam-policies/
    main.tf
  lambda/
    files/
    main.tf
  main.tf
I have my lambda function stored inside /terraform/lambda/files/lambda_function.py.
Whenever I run terraform apply, I have a "null_resource" that executes some commands on my local machine to zip the Python file:
variable "pythonfile" {
description = "lambda function python filename"
type = "string"
}
resource "null_resource" "lambda_preconditions" {
triggers {
always_run = "${uuid()}"
}
provisioner "local-exec" {
command = "rm -rf ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "mkdir -p ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "cp -R ${path.module}/files/${var.pythonfile} ${path.module}/files/zips/lambda_function.py"
}
provisioner "local-exec" {
command = "cd ${path.module}/files/zips && zip -r lambda.zip ."
}
}
My "aws_lambda_function" resource looks like this.
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${base64sha256(format("%s/files/zips/lambda.zip", path.module))}", length(path.cwd) + 1, -1)}")}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
depends_on = [
"null_resource.lambda_preconditions"
]
}
Problem:
Whenever I change the lambda_function.py file and run terraform apply again, everything works fine, but the actual code in the Lambda function does not change.
Also, if I delete all the Terraform state files and apply again, the new change is propagated without any problem.
What could be the possible reason for this?
Instead of using null_resource, I used the archive_file data source, which creates the zip file automatically when changes are detected. I then referenced the archive_file data in the Lambda resource's source_code_hash attribute.
archive_file data source
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "${path.module}/files/zips/lambda.zip"
source {
content = "${file("${path.module}/files/ebs_cleanup_lambda.py")}"
filename = "lambda_function.py"
}
}
The lambda resource
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
}
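As an optional refinement (not part of the original answer), pointing filename at the data source's output_path makes the dependency on the archive explicit, so the zip is always built before the function code is uploaded:

resource "aws_lambda_function" "lambda_function" {
  # ...same arguments as above...

  # Referencing the data source creates an implicit dependency on the archive.
  filename         = "${data.archive_file.lambda_zip.output_path}"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}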