Terraform: Mount volume - digital-ocean

According to the documentation, using Terraform I'm able to create a volume on DigitalOcean:
resource "digitalocean_volume" "foobar" {
region = "nyc1"
name = "baz"
size = 100
description = "an example volume"
}
I'm also able to create a droplet and attach that volume to it:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
}
I'd like to know how to mount this volume at a desired location.
I need the mount to happen automatically: as soon as the droplet is up, the volume should be mounted. I was thinking about using Chef...
Any ideas?

To mount the volume automatically, you can use user_data via cloud-init to run a script, as follows.
This is how your digitalocean_droplet resource should look:
resource "digitalocean_droplet" "foobar" {
name = "baz"
size = "1gb"
image = "coreos-stable"
region = "nyc1"
volume_ids = ["${digitalocean_volume.foobar.id}"]
# user data
user_data = "${data.template_cloudinit_config.cloudinit-example.rendered}"
}
Then your cloud.init file containing the cloudinit_config should be as below. It references the shell script in ${TERRAFORM_HOME}/scripts/disk.sh that mounts your volume automatically:
provider "cloudinit" {}
data "template_file" "shell-script" {
template = "${file("scripts/disk.sh")}"
}
data "template_cloudinit_config" "cloudinit-example" {
gzip = false
base64_encode = false
part {
content_type = "text/x-shellscript"
content = "${data.template_file.shell-script.rendered}"
}
}
The shell script that mounts the volume automatically on startup lives in ${TERRAFORM_HOME}/scripts/disk.sh.
It first checks whether a filesystem already exists on the device; if one does, it won't format the disk, and if not, it will:
#!/bin/bash
# ${DEVICE} is filled in by Terraform when the template is rendered (see the vars above).
DEVICE_FS=$(blkid -o value -s TYPE ${DEVICE})
if [ -z "$DEVICE_FS" ]; then
  # No filesystem found on the volume, so create one.
  mkfs.ext4 ${DEVICE}
fi
mkdir -p /data
echo '${DEVICE} /data ext4 defaults 0 0' >> /etc/fstab
mount /data
I hope this helps

Mounting the volume needs to be done from the guest OS itself using mount, fstab, etc.
The digital ocean docs cover this here.
Using Chef you could use resource_mount to mount it in an automated fashion.
The device name will be /dev/disk/by-id/scsi-0DO_Volume_YOUR_VOLUME_NAME. So, using the example from the Terraform docs, it would be /dev/disk/by-id/scsi-0DO_Volume_baz.
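Putting that device path to use, here is a minimal sketch (not from the answers above) of the question's droplet mounting the volume straight from user_data via /etc/fstab; it assumes an image whose cloud-init runs shell user data, that the volume already carries an ext4 filesystem, and the mount point /mnt/baz is just an example:
resource "digitalocean_droplet" "foobar" {
  name       = "baz"
  size       = "1gb"
  image      = "ubuntu-18-04-x64" # assumption: an image where shell user_data runs via cloud-init
  region     = "nyc1"
  volume_ids = ["${digitalocean_volume.foobar.id}"]

  user_data = <<EOF
#!/bin/bash
# The attached volume appears under /dev/disk/by-id as described above.
# Assumes the volume is already formatted with ext4.
mkdir -p /mnt/baz
echo '/dev/disk/by-id/scsi-0DO_Volume_baz /mnt/baz ext4 defaults,nofail,discard 0 0' >> /etc/fstab
mount -a
EOF
}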

Related

How to sync 1) EC2 volume attachment's name with 2) a variable in EC2's user-data?

Rephrasing the title: how can I avoid having to guess the attached EC2 volume/device's name in the generated aws_instance user_data? That is, how can I avoid the extra attached_device_actual_name variable in the Terraform locals below?
Here's the relevant Terraform configuration:
locals {
  attached_device_name        = "/dev/sdf"      # Used in `aws_volume_attachment`.
  attached_device_actual_name = "/dev/nvme1n1"  # Used in `templatefile`.
}

resource "aws_instance" "foo" {
  user_data = templatefile("./user-data.sh.tpl", {
    attached_device_name = local.attached_device_actual_name
  })
}

resource "aws_volume_attachment" "foo" {
  device_name = local.attached_device_name
  instance_id = aws_instance.foo.id
}
The docs say
The device names that you specify for NVMe EBS volumes in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1).
Does the above-quoted part "device names [...] are renamed" also imply that one should not use these reserved /dev/nvme... names? Indeed, if I set local.attached_device_name to /dev/nvme1n1, which happens to be a "correct" guess in this case, this error pops up:
Error: Error attaching volume (vol-some_id) to instance (i-some_id), message: "Value (/dev/nvme1n1) for parameter device is invalid. /dev/nvme1n1 is not a valid EBS device name.", code: "InvalidParameterValue"
"/dev/nvme1n1 is not a valid EBS device name."
The goal was to have user_data synced with the attached-volume's name and then be able to wait for the volume:
DEVICE="${attached_device_name}"
while [ ! -e "$${DEVICE}" ] ; do
  echo "Waiting for $DEVICE ..."
  sleep 1
done
Env:
Terraform 1.1.2
hashicorp/aws 4.10.0
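One way around the guessing, as a sketch that is not from this thread: pass the EBS volume ID into user_data and address the device through the /dev/disk/by-id symlink that udev derives from the NVMe serial number, which is the volume ID with its dash removed. This assumes a Nitro-based instance type, a distro with the standard udev persistent-storage rules, and an aws_ebs_volume.foo resource defining the volume:
locals {
  # e.g. vol-0123456789abcdef0 -> /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0123456789abcdef0
  attached_device_by_id = "/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_${replace(aws_ebs_volume.foo.id, "-", "")}"
}

resource "aws_instance" "foo" {
  # ami/instance_type omitted, as in the question
  user_data = templatefile("./user-data.sh.tpl", {
    attached_device_name = local.attached_device_by_id
  })
}

resource "aws_volume_attachment" "foo" {
  device_name = "/dev/sdf" # still the API-side name; the guest sees the NVMe name
  volume_id   = aws_ebs_volume.foo.id
  instance_id = aws_instance.foo.id
}
The wait loop above then works unchanged, since the by-id symlink shows up as soon as the volume is attached.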

cannot ssh into instance created from sourceImage "google_compute_instance_from_machine_image"

I am creating an instance from a sourceImage, using this terraform template:
resource "tls_private_key" "sandbox_ssh" {
algorithm = "RSA"
rsa_bits = 4096
}
output "tls_private_key_sandbox" { value = "${tls_private_key.sandbox_ssh.private_key_pem}" }
locals {
custom_data1 = <<CUSTOM_DATA
#!/bin/bash
CUSTOM_DATA
}
resource "google_compute_instance_from_machine_image" "sandboxvm_test_fromimg" {
project = "<proj>"
provider = google-beta
name = "sandboxvm-test-fromimg"
zone = "us-central1-a"
tags = ["test"]
source_machine_image = "projects/<proj>/global/machineImages/sandboxvm-test-img-1"
can_ip_forward = false
labels = {
owner = "test"
purpose = "test"
ami = "sandboxvm-test-img-1"
}
metadata = {
ssh-keys = "${var.sshuser}:${tls_private_key.sandbox_ssh.public_key_openssh}"
}
network_interface {
network = "default"
access_config {
// Include this section to give the VM an external ip address
}
}
metadata_startup_script = local.custom_data1
}
output "instance_ip_sandbox" {
value = google_compute_instance_from_machine_image.sandboxvm_test_fromimg.network_interface.0.access_config.0.nat_ip
}
output "user_name" {
value = var.sshuser
}
I can't even ping or netcat either the private or the public IP of the created VM. Even SSH over the serial port, enabled via the custom startup script, doesn't help.
I suspect that since this is a google-beta capability, it may simply not be working or reliable yet.
Maybe we just can't create VMs (i.e. GCEs) from machine images in GCP yet, unless proven otherwise and the cause turns out to be a simple goof-up that isn't evident in my TF.
I actually managed to solve it, and the whole story reflects somewhat poorly on GCE.
The problem was that, while creating the base image, the instance I had chosen had the following:
#sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
#sudo update-alternatives --install /usr/bin/python3 python /usr/bin/python3.7 1
Maybe I should have tried with "python3" instead of "python",
but when instantiating GCEs based on this MachineImage, it looked for the long-deprecated "python2.7" rather than "python3" and complained about missing or unreadable packages like netplan etc.
Commenting out the "update-alternatives" lines and installing python3.6 and python3.7 explicitly did the trick!

How to execute scripts (remote-exec) under Google Cloud Container Optimized OS using Terraform

I know that Container-Optimized OS is mostly "noexec". I have a use case where I need to execute some simple scripts in my home directory, copy files from my docker image to the host, etc. There are no problems with this when I log into the instance over SSH. But with Terraform it doesn't seem to work:
resource "null_resource" "test_upload2" {
count = length(var.nodes)
provisioner "remote-exec" {
connection {
type = "ssh"
host = google_compute_address.static[count.index].address
private_key = file("keys/private_key")
user = var.admin_username
script_path = "/home/hyperledger/provision.sh"
}
inline = [
"ls"
]
}
depends_on = [google_compute_instance.peer-blockchain-vm, null_resource.test_upload]
}
I get the following error message:
null_resource.test_upload[0] (remote-exec): bash: /home/hyperledger/provision.sh: Permission denied
Error: error executing "/home/hyperledger/provision.sh": Process exited with status 126
Is there a way to perform this purely with Terraform?
It doesn't seem nice to outsource this logic to some local shell script and achieve the goal with "local-exec".
For now I've found a solution.
When I create the instance I set the following startup-script:
resource "google_compute_instance" "my_vm" {
...
metadata_startup_script = "mkdir -p /home/hyperledger/tmp/;sudo mount -t tmpfs -o size=100M tmpfs /home/hyperledger/tmp/"
}
It creates an in-memory disk. The script_path in the resource should then be redefined as follows:
script_path = "/home/hyperledger/tmp/provision.sh"
All scripts in this temporary directory can be executed.
The first provisioner executed should change the owner of the home directory, since it was created above with root as the owner:
provisioner "remote-exec" {
connection {
type = "ssh"
private_key = file(var.private_key)
user = var.admin_username
script_path = "/home/hyperledger/tmp/provision.sh"
}
inline = [
"sudo chown -R hyperledger:hyperledger /home/hyperledger"
]
}

Is there a way to confirm user_data ran successfully with Terraform for EC2?

I'm wondering if it's possible to know when the script in user_data has finished executing.
data "template_file" "script" {
template = file("${path.module}/installing.sh")
}
data "template_cloudinit_config" "config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
filename = "install.sh"
content = "${data.template_file.script.rendered}"
}
}
resource "aws_instance" "web" {
ami = "ami-04e7b4117bb0488e4"
instance_type = "t2.micro"
key_name = "KEY"
vpc_security_group_ids = [aws_default_security_group.default.id]
subnet_id = aws_default_subnet.default_az1.id
associate_public_ip_address = true
iam_instance_profile = "Role_S3"
user_data = data.template_cloudinit_config.config.rendered
tags = {
Name = "Terraform-Ansible"
}
}
And this is the content of the script.
Terraform tells me it successfully applied the changes, but the script is still running. Is there a way I can monitor that?
#!/usr/bin/env bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
sudo apt update
sudo apt upgrade -y
sudo apt install -y unzip
echo END
No, you cannot confirm the user data status from Terraform itself, since user data is just a launch script that executes once the EC2 instance has launched. But with a little extra effort in the init script there is one way to check.
How to check User Data status while launching the instance in aws
If, as mentioned in the link above, you have the user data create a marker file once it completes, then you can check for it like this:
resource "null_resource" "user_data_status_check" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m wait for few minute for instance warm up, adjust accordingly \x1B[0m"
# wait 30 sec
sleep 30
ssh -i yourkey.pem instance_ip ConnectTimeout=30 -o 'ConnectionAttempts 5' test -f "/home/user/markerfile.txt" && echo found || echo not found
if [ $? -eq 0 ]; then
echo "user data sucessfully executed"
else
echo "Failed to execute user data"
fi
EOT
}
triggers = {
#remove this once you test it out as it should run only once
always_run ="${timestamp()}"
}
depends_on = ["aws_instance.my_instance"]
}
So this script will check for the marker file on the newly launched server by SSHing in with a 30-second connect timeout and a maximum of 5 connection attempts.
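The marker file itself can come from the user data; one hedged way, reusing the questioner's cloudinit config and the /home/user/markerfile.txt path the check above expects (adjust it to a real, existing directory), is to append a final step to the rendered script:
data "template_cloudinit_config" "config" {
  gzip          = false
  base64_encode = false

  part {
    filename     = "install.sh"
    content_type = "text/x-shellscript"
    # Append a marker step so the null_resource check above has something to find.
    content = join("\n", [
      data.template_file.script.rendered,
      "touch /home/user/markerfile.txt",
    ])
  }
}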
Here are some pointers to remember:
User data shell scripts must start with the Shebang #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Scripts entered as user data are run as the root user, so no need to use the sudo command in the init script.
When a user data script is processed, it is copied to and run from /var/lib/cloud/instances/instance-id/. The script is not deleted after it runs and can be found in this directory under the name user-data.txt, so to check whether your shell script made it to the server, look for that file in this directory.
The cloud-init output log file (/var/log/cloud-init-output.log) captures the console output of your user_data shell script. To see how your user_data shell script was executed and what it printed, check this file.
Source: https://www.middlewareinventory.com/blog/terraform-aws-ec2-user_data-example/
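Another option, not mentioned in the answers here: if the machine running Terraform can SSH to the instance, newer cloud-init releases ship cloud-init status --wait, which blocks until all cloud-init stages (and therefore user_data) have finished, so a remote-exec provisioner can hold up the apply until then. A sketch, with the connection details as assumptions:
resource "null_resource" "wait_for_user_data" {
  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = aws_instance.web.public_ip
      user        = "ubuntu"           # assumption: default user of the chosen AMI
      private_key = file("~/.ssh/KEY") # assumption: the private key matching key_name = "KEY"
    }
    inline = [
      # Blocks until cloud-init (and with it the user_data script) has completed.
      "cloud-init status --wait"
    ]
  }

  depends_on = [aws_instance.web]
}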
Well, I use these two ways to confirm.
At the end of the cloud-init config file, this line sends me a notification through WhatsApp (using CallMeBot). Thus, no matter how long the setup takes, I always get notified when it's ready to use. I watch some series or read something in that time; no time wasted.
curl -X POST "https://api.callmebot.com/whatsapp.php?phone=12345678910&text=Ec2+transcoder+setup+complete&apikey=12345"
At the end of the cloud-init config, these lines run:
echo "for faster/visual confirmation of above execution.."
wget https://www.sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4 -O /home/ubuntu/dpnd_comp.mp4
When I sign in to the instance I can see the file right away.
And I'm loving it. Hope this helps someone. Also, don't forget to tell me your method too.

Why doesn't UserData run when I create an Amazon Lightsail instance?

I create an Amazon Lightsail instance with the AWS SDK and set UserData as below, but the UserData script doesn't seem to run:
var shuju = new CreateInstancesRequest()
{
    BlueprintId = "centos_7_1901_01",
    BundleId = "micro_2_0",
    AvailabilityZone = "ap-northeast-1d",
    InstanceNames = new System.Collections.Generic.List<string>() { "test" },
    UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
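Expressed in Terraform, since that is what the rest of this page uses, the same Lightsail instance with a shebang-prefixed UserData might look roughly like this (aws_lightsail_instance is the AWS provider's resource; the script body mirrors the question and runs as root at launch):
resource "aws_lightsail_instance" "test" {
  name              = "test"
  availability_zone = "ap-northeast-1d"
  blueprint_id      = "centos_7_1901_01"
  bundle_id         = "micro_2_0"

  # The first line must be the #! shebang for the script to run at launch.
  user_data = <<EOF
#!/bin/bash
echo 'root:test123456-' | chpasswd
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config
reboot
EOF
}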