Failing to use Terraform provisioner with AWS Lightsail - amazon-web-services

I am having trouble using the provisioners (both "file" and "remote-exec") with AWS Lightsail. With the "file" provisioner I keep getting a dial error to port 22 with connection refused, and "remote-exec" gives me a timeout error. I can see it keeps trying to connect to the instance, but it just cannot connect to it.
For the file provisioner, I have also tried copying the file with scp directly and it works just fine.
A sample snippet of the connection block I am using is as follows:
resource "aws_lightsail_instance" "han-mongo" {
name = "han-mongo"
availability_zone = "us-east-1b"
blueprint_id = "ubuntu_16_04"
bundle_id = "nano_1_0"
key_pair_name = "my_key_pair"
user_data = "${file("userdata.sh")}"
provisioner "file" {
source = "file.service"
destination = "/home/ubuntu"
connection {
type = "ssh"
private_key = "${file("my_key.pem")}"
user = "ubuntu"
timeout = "20s"
}
}
}

In addition to the authentication information, it's also necessary to tell Terraform which IP address it should use to connect, like this:
connection {
  type        = "ssh"
  host        = "${self.public_ip_address}"
  private_key = "${file("my_key.pem")}"
  user        = "ubuntu"
  timeout     = "20s"
}
For some resources Terraform is able to automatically infer some of the connection details from the resource attributes, but at present that is not supported for Lightsail instances and so it's necessary to specify the host argument explicitly.
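For reference, here is how the provisioner from the question might look with the host argument added; this is just a sketch reusing the resource and file names from above:
resource "aws_lightsail_instance" "han-mongo" {
  name              = "han-mongo"
  availability_zone = "us-east-1b"
  blueprint_id      = "ubuntu_16_04"
  bundle_id         = "nano_1_0"
  key_pair_name     = "my_key_pair"
  user_data         = "${file("userdata.sh")}"

  provisioner "file" {
    source      = "file.service"
    destination = "/home/ubuntu"

    connection {
      type        = "ssh"
      # Tell Terraform which address to dial; Lightsail does not infer this.
      host        = "${self.public_ip_address}"
      private_key = "${file("my_key.pem")}"
      user        = "ubuntu"
      timeout     = "20s"
    }
  }
}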

Related

Django Terraform DigitalOcean re-create environment in new droplet

I have a SaaS-based Django app. When a customer asks to use my software, I want to auto-provision a new droplet and auto-deploy the app there, and the info should be saved in my database (IP, customer name, database info, etc.).
This is my Terraform script, and it is working very well so far:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = "dop_v1_60f33a1<MyToken>a363d033"
}
resource "digitalocean_droplet" "web" {
image = "ubuntu-18-04-x64"
name = "web-1"
region = "nyc3"
size = "s-1vcpu-1gb"
ssh_keys = ["93:<The SSH finger print>::01"]
connection {
host = self.ipv4_address
user = "root"
type = "ssh"
private_key = file("/home/py/.ssh/id_rsa") # it works
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"export PATH=$PATH:/usr/bin",
# install docker-compse
# install docker
# clone my github repo
"docker-compose up --build -d"
]
}
}
I want it so that when I run the commands, a new droplet and a new database instance are created, and the database is connected to my Django .env file. Everything should be created automatically. Can anyone please help me with how to do this?
Or is my approach wrong? What would be the best solution in this situation?
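One possible direction (not from the original post, just a hedged sketch): a managed database can be created with the digitalocean_database_cluster resource, and its connection details rendered into the Django .env file, for example with templatefile and a file provisioner. The names db, env.tftpl and the /app/.env destination below are illustrative assumptions:
resource "digitalocean_database_cluster" "db" {
  name       = "django-db"
  engine     = "pg"
  version    = "13"
  size       = "db-s-1vcpu-1gb"
  region     = "nyc3"
  node_count = 1
}

# Render a .env from a (hypothetical) template and upload it to the droplet.
resource "null_resource" "env" {
  provisioner "file" {
    content = templatefile("${path.module}/env.tftpl", {
      db_host     = digitalocean_database_cluster.db.host
      db_port     = digitalocean_database_cluster.db.port
      db_user     = digitalocean_database_cluster.db.user
      db_password = digitalocean_database_cluster.db.password
    })
    destination = "/app/.env"

    connection {
      host        = digitalocean_droplet.web.ipv4_address
      user        = "root"
      type        = "ssh"
      private_key = file("/home/py/.ssh/id_rsa")
    }
  }
}
The same attributes (droplet IP, database host, credentials) could also be exported as outputs and recorded in your own database by whatever tooling invokes Terraform.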

How to execute scripts (remote-exec) under Google Cloud Container Optimized OS using Terraform

I know that Container-Optimized OS is mostly "noexec". I have a use case where I need to execute some simple scripts in my home directory, copy files from my Docker image to the host, etc. There are no problems with this when I log into the instance over SSH, but with Terraform it does not seem to work:
resource "null_resource" "test_upload2" {
count = length(var.nodes)
provisioner "remote-exec" {
connection {
type = "ssh"
host = google_compute_address.static[count.index].address
private_key = file("keys/private_key")
user = var.admin_username
script_path = "/home/hyperledger/provision.sh"
}
inline = [
"ls"
]
}
depends_on = [google_compute_instance.peer-blockchain-vm, null_resource.test_upload]
}
I get the following error message:
null_resource.test_upload[0] (remote-exec): bash: /home/hyperledger/provision.sh: Permission denied
Error: error executing "/home/hyperledger/provision.sh": Process exited with status 126
Is there a way to perform this purely with Terraform?
It does not seem nice to outsource this logic to some local shell script and achieve the goal with "local-exec".
For now I've found a solution.
When I create the instance I set the following startup-script:
resource "google_compute_instance" "my_vm" {
...
metadata_startup_script = "mkdir -p /home/hyperledger/tmp/;sudo mount -t tmpfs -o size=100M tmpfs /home/hyperledger/tmp/"
}
This creates an in-memory disk. The script_path in the resource should then be redefined as follows:
script_path = "/home/hyperledger/tmp/provision.sh"
All scripts in this temporary directory can be executed.
The first provisioner to run should change the owner of the home directory, since it was created above with root as the owner:
provisioner "remote-exec" {
connection {
type = "ssh"
private_key = file(var.private_key)
user = var.admin_username
script_path = "/home/hyperledger/tmp/provision.sh"
}
inline = [
"sudo chown -R hyperledger:hyperledger /home/hyperledger"
]
}
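Putting the pieces together, the null_resource from the question then only needs its script_path pointed at the tmpfs directory; a sketch reusing the same names and depends_on as above:
resource "null_resource" "test_upload2" {
  count = length(var.nodes)

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = google_compute_address.static[count.index].address
      private_key = file("keys/private_key")
      user        = var.admin_username
      # Run the uploaded script from the tmpfs mount, which is not noexec.
      script_path = "/home/hyperledger/tmp/provision.sh"
    }

    inline = [
      "ls"
    ]
  }

  depends_on = [google_compute_instance.peer-blockchain-vm, null_resource.test_upload]
}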

Get endpoint for Terraform with aws_elasticache_replication_group

I have what I think is a simple Terraform config for AWS ElastiCache with Redis:
resource "aws_elasticache_replication_group" "my_replication_group" {
replication_group_id = "my-rep-group",
replication_group_description = "eln00b"
node_type = "cache.m4.large"
port = 6379
parameter_group_name = "default.redis5.0.cluster.on"
snapshot_retention_limit = 1
snapshot_window = "00:00-05:00"
subnet_group_name = "${aws_elasticache_subnet_group.my_subnet_group.name}"
automatic_failover_enabled = true
cluster_mode {
num_node_groups = 1
replicas_per_node_group = 1
}
}
I tried to define the endpoint output using:
output "my_cache" {
value = "${aws_elasticache_replication_group.my_replication_group.primary_endpoint_address}"
}
When I run an apply through terragrunt I get:
Error: Error running plan: 1 error(s) occurred:
module.mod.output.my_cache: Resource 'aws_elasticache_replication_group.my_replication_group' does not have attribute 'primary_endpoint_address' for variable 'aws_elasticache_replication_group.my_replication_group.primary_endpoint_address'
What am I doing wrong here?
The primary_endpoint_address attribute is only available for non-cluster-mode Redis replication groups, as mentioned in the docs:
primary_endpoint_address - (Redis only) The address of the endpoint for the primary node in the replication group, if the cluster mode is disabled.
When using cluster mode you should use configuration_endpoint_address instead to connect to the Redis cluster.
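For the cluster-mode configuration in the question, the output would then look something like this (a sketch using the resource name from above):
output "my_cache" {
  value = "${aws_elasticache_replication_group.my_replication_group.configuration_endpoint_address}"
}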

How to execute PowerShell command through Terraform

I am trying to create a Windows EC2 instance from an AMI and execute a PowerShell command on it:
data "aws_ami" "ec2-worker-initial-encrypted-ami" {
filter {
name = "tag:Name"
values = ["ec2-worker-initial-encrypted-ami"]
}
}
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
provisioner "local-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
}
}
and I am facing the following error:
aws_instance.my-test-instance: Error running command 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
-Schedule': exit status 1. Output: The term 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1'
is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and try again. At line:1
char:72
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
<<<< -Schedule
CategoryInfo : ObjectNotFound: (C:\ProgramData...izeInstance.ps1:String) [],
CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
You are using a local-exec provisioner, which runs the requested PowerShell code on the workstation running Terraform:
The local-exec provisioner invokes a local executable after a resource
is created. This invokes a process on the machine running Terraform,
not on the resource.
It sounds like you want to execute the PowerShell script on the resulting instance, in which case you'll need to use a remote-exec provisioner, which will run your PowerShell on the target resource:
The remote-exec provisioner invokes a script on a remote resource
after it is created. This can be used to run a configuration
management tool, bootstrap into a cluster, etc.
You will also need to include connection details, for example:
provisioner "remote-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
}
Which means this instance must also be ready to accept WinRM connections.
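For example, WinRM could be enabled via user data before the provisioner connects. A minimal, hypothetical sketch (allowing basic auth over unencrypted HTTP on port 5985 is for illustration only and should be hardened for real use):
resource "aws_instance" "my-test-instance" {
  # ... ami, instance_type, tags as above ...

  user_data = <<-EOF
    <powershell>
    # Open up WinRM so the remote-exec provisioner can connect.
    winrm quickconfig -q
    winrm set winrm/config/service/auth '@{Basic="true"}'
    winrm set winrm/config/service '@{AllowUnencrypted="true"}'
    netsh advfirewall firewall add rule name="WinRM-HTTP" dir=in localport=5985 protocol=TCP action=allow
    </powershell>
  EOF
}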
There are other options for completing the original task itself, though, such as running the script from user data, which Terraform also supports. That might look like the following example:
Example of using a userdata file in Terraform
File named userdata.txt:
<powershell>
C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule
</powershell>
Launch instance using the userdata file:
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
user_data = "${file(userdata.txt)}"
}
The file interpolation will read the contents of the userdata file as a string to pass to user data for the instance launch. Once the instance launches, it should run the script as you expect.
What Brian is claiming is correct; you will get an "invalid or unknown key: interpreter" error.
To run PowerShell correctly, you will need to run it as follows, based on Brandon's answer:
provisioner "remote-exec" {
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
inline = [
"powershell -ExecutionPolicy Unrestricted -File C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule"
]
}
Edit
To copy files over to the machine, use the below:
provisioner "file" {
source = "${path.module}/some_path"
destination = "C:/some_path"
connection {
host = "${azurerm_network_interface.vm_nic.private_ip_address}"
timeout = "3m"
type = "winrm"
https = true
port = 5986
use_ntlm = true
insecure = true
#cacert = "${azurerm_key_vault_certificate.vm_cert.certificate_data}"
user = var.admin_username
password = var.admin_password
}
}
Update:
Currently provisioners are not recommended by HashiCorp; the full instructions and explanation (it is long) can be found at: terraform.io/docs/provisioners/index.html
FTR: Brandon's answer is correct, except the example code provided for the remote-exec includes keys that are unsupported by the provisioner.
Neither command nor interpreter is a supported key.
https://www.terraform.io/docs/provisioners/remote-exec.html

nomad: Pull docker image from ECR with AWS Access and Secret keys

My problem
I have successfully deployed a Nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Docker Hub.
I've slightly altered the default config file created by nomad init to change the number of running containers, and everything works as expected.
The problem is that the actual image I would like to run is in ECR, which requires AWS permissions (access and secret key), and I don't know how to send these.
Code
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "cache" {
count = 30
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "docker"
config {
# My problem here
image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"
port_map {
db = 6379
}
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
What have I tried
Extensive Google Search
Reading the manual
Placing the AWS credentials on the machine which runs the Nomad file (using aws configure)
My question
How can nomad be configured to pull Docker containers from AWS ECR using the AWS credentials?
Pretty late for you, but AWS ECR does not handle authentication in the way that Docker expects. There you need to run sudo $(aws ecr get-login --no-include-email --region ${your region}); running the returned command actually authenticates in a Docker-compliant way.
Note that the region is optional if the AWS CLI is configured. Personally, I allocate an IAM role to the box (allowing ECR pull/list/etc.), so that I don't have to deal with credentials manually.
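If you go the IAM-role route, one way to wire it up (a hedged sketch, not from the original answer) is to install Amazon's docker-credential-ecr-login helper on the client node and point Nomad's Docker driver at it in the client configuration; the job can then reference the ECR image by its plain registry path (no https:// prefix) without an auth block:
# Nomad client configuration (assumes the docker-credential-ecr-login
# binary is on the client's PATH and the instance has an IAM role
# permitting ECR pulls).
plugin "docker" {
  config {
    auth {
      # Nomad calls "docker-credential-<helper>" to look up registry credentials.
      helper = "ecr-login"
    }
  }
}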
I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my registry, and it works. Assuming that holds, it should work fine for you as well:
config {
  image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"

  auth {
    username = "MYMAGICUSER"
    password = "MYMAGICPASSWORD"
  }
}