Terraform: how to configure a null_resource with multiple connections - amazon-web-services

Suppose the ec2 module creates two servers dynamically, like:
module "ec2-web" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
count = 2
name = "${local.appName}-webserver-${count.index + 1}"
.....
}
Now I have a null_resource configuration, which has only a single connection:
resource "null_resource" "web-upload" {
depends_on = [module.ec2-web]
connection {
type = "ssh"
host = module.ec2-web[0].public_ip
user = "ec2-user"
password = ""
private_key = file("keypair/a-ssh-key.pem")
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /var/www/html",
"sudo chown -R ec2-user:ec2-user /var/www/html",
]
}
provisioner "file" {
source = "web/"
destination = "/var/www/html"
}
}
How should I update the configuration so that Terraform uploads the files to both servers?

You would use the same approach with the count meta-argument:
resource "null_resource" "web-upload" {
count = 2
connection {
type = "ssh"
host = module.ec2-web[count.index].public_ip
user = "ec2-user"
password = ""
private_key = file("keypair/a-ssh-key.pem")
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo mkdir -p /var/www/html",
"sudo chown -R ec2-user:ec2-user /var/www/html",
]
}
provisioner "file" {
source = "web/"
destination = "/var/www/html"
}
}
The explicit dependency via the depends_on meta-argument is not required, because the module output is referenced directly (module.ec2-web[count.index].public_ip). That reference means Terraform will wait for the module to finish creating its resources before attempting the null_resource.
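If you would rather not repeat the literal 2 in two places, one option is to drive both counts from a single value and add a trigger so the provisioners re-run when an instance is replaced. A minimal sketch, assuming a hypothetical web_instance_count variable and that the module exposes an id output (recent versions of terraform-aws-modules/ec2-instance do):
variable "web_instance_count" {
  type    = number
  default = 2
}

# module "ec2-web" would then use count = var.web_instance_count instead of the literal 2.

resource "null_resource" "web-upload" {
  count = var.web_instance_count

  # Re-run the provisioners if the underlying instance is ever replaced.
  triggers = {
    instance_id = module.ec2-web[count.index].id
  }

  connection {
    type        = "ssh"
    host        = module.ec2-web[count.index].public_ip
    user        = "ec2-user"
    private_key = file("keypair/a-ssh-key.pem")
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkdir -p /var/www/html",
      "sudo chown -R ec2-user:ec2-user /var/www/html",
    ]
  }

  provisioner "file" {
    source      = "web/"
    destination = "/var/www/html"
  }
}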

Related

Issue with custom log routing with ECS Fargate, FireLens and Fluent Bit to CloudWatch

I am trying to get logs from my app container to CloudWatch using FireLens and Fluent Bit by AWS, but it is not working.
The application writes its logs to /opt/app/log/*.log.
Here are my task definition and Fluent Bit config file:
resource "aws_ecs_task_definition" "batching_task" {
family = "${var.project}-${var.environment}-node1"
container_definitions = jsonencode([
{
essential = true
image = "fluent-bit image"
repositoryCredentials = {
credentialsParameter = var.docker_login
}
name = "log_router"
firelensConfiguration = {
type = "fluentbit"
options={
enable-ecs-log-metadata ="false"
config-file-type = "file"
config-file-value = "/fluent-bit.conf"
}
}
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = "/ecs/app-${var.environment}"
awslogs-region = "us-east-1"
awslogs-create-group = "true"
awslogs-stream-prefix= "firelens"
}
}
mountPoints = [
{
"containerPath" : "/opt/app/log/",
"sourceVolume" : "var-log"
}
]
memoryReservation = 50
},
{
name = "node"
image = "app from private docker registry"
repositoryCredentials = {
credentialsParameter = var.docker_login
}
essential = true
mountPoints = [
{
"containerPath" : "/opt/app/log/",
"sourceVolume" : "var-log"
}
]
environment = [
{
name = "APP_PORT"
value = "80"
}
]
portMappings = [
{
containerPort = 80
hostPort = 80
protocol = "tcp"
}
]
logConfiguration = {
logDriver = "awsfirelens"
options = {
Name = "cloudwatch"
region = "us-east-1"
enable-ecs-log-metadata = "false"
log_group_name = "/ecs/app"
auto_create_group = "true"
log_stream_name = "$(ecs_task_id)"
retry_limit = "2"
}
}
dependsOn = [
{
"containerName": "log_router",
"condition": "START"
}
]
}
])
volume {
name = "var-log"
}
execution_role_arn = aws_iam_role.app.arn
task_role_arn = aws_iam_role.app.arn
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.fargate_cpu
memory = var.fargate_memory
}
The Dockerfile the Fluent Bit image is built from:
FROM amazon/aws-for-fluent-bit:latest
ADD fluent-bit.conf /fluent-bit.conf
ADD test.log /test.log
ENV AWS_REGION=us-east-1
ARG AWS_ACCESS_KEY_ID # you could give this a default value as well
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY # you could give this a default value as well
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN mkdir ~/.aws && cd ~/.aws && touch credentials
RUN echo -e '\
[default]\n\
$AWS_ACCESS_KEY_ID\n\
$AWS_SECRET_ACCESS_KEY\
' > ~/.aws/credentials
Fluent-bit.conf
[SERVICE]
Flush 5
Daemon off
[INPUT]
# test log
Name tail
Path /opt/app/log/test.log
Tag test
[OUTPUT]
# test log
Name cloudwatch_logs
Match test*
region us-east-1
log_group_name /ecs/app
log_stream_name app-$(ecs_task_id)
auto_create_group true
log_retention_days 90
I have been following these docs:
https://github.com/aws-samples/amazon-ecs-firelens-under-the-hood/tree/9ecd26e02cb5e13bb5c312c651a3ac601f7f42cd/fluent-bit-log-pipeline
https://docs.fluentbit.io/manual/v/1.0/configuration/file
https://github.com/aws-samples/amazon-ecs-firelens-examples/blob/mainline/examples/fluent-bit/ecs-log-collection/task-definition-tail.json
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-example-taskdefs.html
Two log streams are created, which are part of the task definition, but they only forward stdout logs; the app logs I need are not being forwarded.
The log streams that are part of the Fluent Bit config are not created.
Questions:
1) How does my log router sidecar container read logs from the app container's filesystem? Do I have to set anything for that?
2) Is my configuration file okay, or does it need anything else?
3) What am I missing?
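No answer is recorded for this question here, but regarding question 1: containers in the same task read each other's files through a shared task volume, which the task definition above already declares. The pieces that have to line up, sketched with the names from the question (treat this as a checklist, not a verified fix):
# Declared once on the task definition:
volume {
  name = "var-log"
}

# Mounted in BOTH container definitions so they see the same files:
#   mountPoints = [{ containerPath = "/opt/app/log/", sourceVolume = "var-log" }]

# The tail input in fluent-bit.conf must point at the same mounted path the app
# writes to, e.g. /opt/app/log/*.log rather than only /opt/app/log/test.log:
#   [INPUT]
#       Name tail
#       Path /opt/app/log/*.log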

how to run a bash script in a GCP VM using Terraform

Hey folks,
I want to run a script on a GCP machine. For that I created the resources in the file below:
disk = google_compute_disk.default2.id
instance = google_compute_instance.default.id
} # attach disk to VM
resource "google_compute_firewall" "firewall" {
name = "gritfy-firewall-externalssh"
network = "default"
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["externalssh"]
} # allow ssh
resource "google_compute_address" "static" {
name = "vm-public-address"
project = "fit-visitor-305606"
region = "asia-south1"
depends_on = [ google_compute_firewall.firewall ]
} # reserve ip
resource "google_compute_instance" "default" {
name = "new"
machine_type = "custom-8-16384"
zone = "asia-south1-a"
tags = ["foo", "bar"]
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
network = "default"
access_config {
nat_ip = google_compute_address.static.address
}
}
metadata = {
ssh-keys = "${var.user}:${file(var.publickeypath)}"
}
lifecycle {
ignore_changes = [attached_disk]
}
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
}
provisioner "remote-exec" {
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
timeout = "500s"
private_key = file(var.privatekeypath)
}
inline = [
"sudo yum -y install epel-release",
"sudo yum -y install nginx",
"sudo nginx -v",
]
}
} # Create VM
resource "google_compute_disk" "default2" {
name = "test-disk"
type = "pd-balanced"
zone = "asia-south1-a"
image = "centos-7-v20210609"
size = 100
} # Create Disk
Using this I am able to create the VM and the disk, and also to attach the disk to the VM, but I am not able to run my script.
The error log is:
The private key part is working fine: the key is assigned to the VM, and when I try to connect with that key it connects, so the problem may be with the provisioner part only.
Any help or guidance would be really helpful...
As the error message says, you need a connection configuration for the provisioner. You also need a remote-exec provisioner to run the script.
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
connection {
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/autoo.sh",
"cd /tmp",
"./autoo.sh"
]
connection {
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
source: https://stackoverflow.com/a/36668395/5454632
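Note that the connection block also needs an explicit host on current Terraform versions; the question's own remote-exec connection already sets one, so a combined sketch (same variables as above) might look like:
provisioner "file" {
  source      = "autoo.sh"
  destination = "/tmp/autoo.sh"

  connection {
    type        = "ssh"
    host        = google_compute_address.static.address
    user        = var.user
    private_key = file(var.privatekeypath)
  }
}
The remote-exec provisioner would carry the same connection block.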

Relative paths in Terraform

I am trying to create an AWS Lambda function using Terraform.
My Terraform directory looks like:
terraform
  iam-policies
    main.tf
  lambda
    files/
    main.tf
  main.tf
I have my lambda function stored inside /terraform/lambda/files/lambda_function.py.
Whenever I run terraform apply, a "null_resource" executes some commands on my local machine that zip the Python file:
variable "pythonfile" {
description = "lambda function python filename"
type = "string"
}
resource "null_resource" "lambda_preconditions" {
triggers {
always_run = "${uuid()}"
}
provisioner "local-exec" {
command = "rm -rf ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "mkdir -p ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "cp -R ${path.module}/files/${var.pythonfile} ${path.module}/files/zips/lambda_function.py"
}
provisioner "local-exec" {
command = "cd ${path.module}/files/zips && zip -r lambda.zip ."
}
}
My "aws_lambda_function" resource looks like this.
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${base64sha256(file("${substr("${format("%s/files/zips/lambda.zip", path.module)}", length(path.cwd) + 1, -1)}"))}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
depends_on = [
"null_resource.lambda_preconditions"
]
}
Problem:
Whenever I change the lambda_function.py file and run terraform apply again, everything works fine, but the actual code in the Lambda function does not change.
Also if I delete all the terraform state files and apply again, the new change is propagated without any problem.
What could be the possible reason for this?
Instead of using null_resource, I used the archive_file data source, which creates the zip file automatically when changes are detected. I then referenced the archive_file data source in the Lambda resource's source_code_hash attribute.
archive_file data source
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "${path.module}/files/zips/lambda.zip"
source {
content = "${file("${path.module}/files/ebs_cleanup_lambda.py")}"
filename = "lambda_function.py"
}
}
The lambda resource
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
}
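If the function ever grows beyond a single file, archive_file can also zip a whole directory via source_dir instead of an inline source block; a small sketch, with a hypothetical files/src directory holding the handler code:
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/files/src"   # hypothetical directory containing lambda_function.py
  output_path = "${path.module}/files/zips/lambda.zip"
}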

terraform depends_on for provisioner file

I want the data "template_file" in the Terraform code below to execute after the provisioner "file" (basically an Ansible playbook) is copied to the EC2 instance. I am not able to use "depends_on" successfully in this scenario. Can someone please help me achieve this? Below is the sample code snippet.
resource "aws_eip" "opendj-source-ami-eip" {
instance = "${aws_instance.opendj-source-ami-server.id}"
vpc = true
connection {
host = "${aws_eip.opendj-source-ami-eip.public_ip}"
user = "ubuntu"
timeout = "3m"
agent = false
private_key = "${file(var.private_key)}"
}
provisioner "file" {
source = "./${var.copy_password_file}"
destination = "/home/ubuntu/${var.copy_password_file}"
}
provisioner "file" {
source = "./${var.ansible_playbook}"
destination = "/home/ubuntu/${var.ansible_playbook}"
}
}
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
#depends_on = ["<< not sure what to put here>>"]
}
The correct format for depends_on references the resource as a whole, so in your case it would look like:
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
depends_on = ["aws_eip.opendj-source-ami-eip"]
}
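For reference, the quoted string form of depends_on shown above is the older (0.11-era) syntax; on current Terraform the same dependency is written as a bare resource reference:
data "template_file" "run-ansible-playbooks" {
  # ... same template as above ...
  depends_on = [aws_eip.opendj-source-ami-eip]
}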

parametrized terraform template

I have a Terraform project that creates 99 virtual machines in OpenStack. I cannot use cloud-init, and I must modify the hostname of every machine.
hostname.tplt:
sudo sed -i -e "s/debian[7-9]/${host_name}/g" /etc/hostname
sudo invoke-rc.d hostname.sh start
sudo sed -i -e "s/127\.0\.1\.1.*/127.0.1.1\t${host_name}.${domain_name} ${host_name}/g" /etc/hosts
sudo apt-get update && sudo apt-get -y install dbus && sudo hostnamectl set-hostname ${host_name}
Part of main.tf:
data "template_file" "hostname_servers" {
template = "${file("templates/hostname.tplt")}"
vars {
host_name = "${format("%s-proxy-%02d", var.prefix_name, count.index+1)}"
domain_name = "${var.domain_name}"
}
}
The resource:
resource "openstack_compute_instance_v2" "proxy-instance" {
count = "${var.count_proxy}"
name = "${format("%s-proxy-%02d", var.prefix_name, count.index+1)}"
image_name = "${var.image}"
flavor_name = "${var.flavor_proxy}"
network {
name = "${format("%s-%s", var.prefix_name, var.network_name)}"
}
connection {
user = "${var.user}"
}
provisioner "remote-exec" {
inline = [
"${data.template_file.hostname_servers.rendered}"
]
}
}
The use case:
When I run terraform plan it works for the proxy-instance resource, but I need to do this for all 99 machines.
I don't want to duplicate the template data 99 times,
and I don't know how to parametrize the template so that it applies to all the machines.
Any ideas?
If you set count to the same value on multiple resources then you can use count.index to create correspondences between the instances of one block and the instances of another, like this:
data "template_file" "hostname_servers" {
count = "${var.count_proxy}"
template = "${file("templates/hostname.tplt")}"
vars {
host_name = "${format("%s-proxy-%02d", var.prefix_name, count.index+1)}"
domain_name = "${var.domain_name}"
}
}
resource "openstack_compute_instance_v2" "proxy-instance" {
count = "${var.count_proxy}"
name = "${format("%s-proxy-%02d", var.prefix_name, count.index+1)}"
image_name = "${var.image}"
flavor_name = "${var.flavor_proxy}"
network {
name = "${format("%s-%s", var.prefix_name, var.network_name)}"
}
connection {
user = "${var.user}"
}
provisioner "remote-exec" {
inline = [
# use count.index to match the template instance corresponding
# to this compute instance.
"${data.template_file.hostname_servers.*.rendered[count.index]}"
]
}
}
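On current Terraform the same result is usually achieved without the template_file data source at all: the built-in templatefile function renders the template per instance, so there is no second count to keep in sync. A sketch under that assumption (the template is assumed to sit under the module directory, and a host is added to the connection because current versions require one):
resource "openstack_compute_instance_v2" "proxy-instance" {
  count       = var.count_proxy
  name        = format("%s-proxy-%02d", var.prefix_name, count.index + 1)
  image_name  = var.image
  flavor_name = var.flavor_proxy

  network {
    name = format("%s-%s", var.prefix_name, var.network_name)
  }

  connection {
    user = var.user
    host = self.access_ip_v4   # assumption: SSH over the instance's IPv4 address
  }

  provisioner "remote-exec" {
    inline = [
      # templatefile renders the hostname script for this specific instance.
      templatefile("${path.module}/templates/hostname.tplt", {
        host_name   = format("%s-proxy-%02d", var.prefix_name, count.index + 1)
        domain_name = var.domain_name
      })
    ]
  }
}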