Terraform: provisioner couldn't be found - amazon-web-services

I have resource "aws_instance" "webserver" in my .tf file which contains provisioner "install-apache":
provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET-KEY"
region = "us-east-1"
}
resource "aws_instance" "webserver" {
ami = "ami-b374d5a5"
instance_type = "t2.micro"
provisioner "install-apache" {
command = "apt-get install nginx"
}
}
After running terraform plan I got an error:
* aws_instance.webserver: provisioner install-apache couldn't be found
According to the Terraform documentation, everything looks fine.

The provisioner value must be one of the following:
chef
file
local-exec
remote-exec
I believe in your case you want the remote-exec provisioner:
provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET-KEY"
region = "us-east-1"
}
resource "aws_instance" "webserver" {
ami = "ami-b374d5a5"
instance_type = "t2.micro"
provisioner "remote-exec" {
inline = [
"apt-get install nginx"
]
}
}
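Note that remote-exec also needs connection details so Terraform can reach the instance over SSH (the next question covers this). A minimal sketch of the extra block to add inside the provisioner, assuming an Ubuntu AMI and a hypothetical private-key variable:

  connection {
    type        = "ssh"
    user        = "ubuntu"                   # assumed default user for Ubuntu AMIs
    private_key = file(var.private_key_path) # hypothetical variable
    host        = self.public_ip
  }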

Related

Running aws instance with connection using terraform

resource "aws_instance" "appserver1" {
ami = var.imageid
instance_type = var.instancetype
key_name = var.key
security_groups = [aws_security_group.allow_all.name]
connection {
user = "ubuntu"
private_key = file(var.privatekeypath)
}
provisioner "remote-exec" {
inline = [
"sudo apt-get update",
"sudo apt-get install tomcat7 -y"
]
}
}
"terraform validate" gives me the error:
Error: Missing required argument
on main.tf line 52, in resource "aws_instance" "appserver1":
52: connection {
The argument "host" is required, but no definition was found.
You have to specify connection details in the provisioner block. For example:
resource "aws_instance" "appserver1" {
ami = var.imageid
instance_type = var.instancetype
key_name = var.key
security_groups = [aws_security_group.allow_all.name]
provisioner "remote-exec" {
connection {
type = "ssh"
user = "ubuntu"
private_key = file(var.privatekeypath)
host = self.public_ip
}
inline = [
"sudo apt-get update",
"sudo apt-get install tomcat7 -y"
]
}
}
I don't think a connection block inside the instance configuration will work here. In your case, using user_data would be better suited: instead of using a connection, you can make use of the EC2 user_data option to install Tomcat while the instance is launched, as sketched below.
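A minimal sketch of the user_data approach (resource and variable names reused from the question; the bootstrap script itself is an assumption):

resource "aws_instance" "appserver1" {
  ami             = var.imageid
  instance_type   = var.instancetype
  key_name        = var.key
  security_groups = [aws_security_group.allow_all.name]

  # Runs once at first boot via cloud-init; no SSH connection or provisioner needed.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y tomcat7
  EOF
}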

How to implement export values in terraform

I tried to create a simple example in AWS environments. In the beginning, I export 2 values:
export AWS_ACCESS_KEY_ID= something
export AWS_SECRET_ACCESS_KEY= something
After that, I wrote a simple code.
provider "aws" {
region = "us-east-1"
access_key = AWS_ACCESS_KEY_ID
secret_key = AWS_SECRET_ACCESS_KEY
}
resource "aws_instance" "example" {
ami = "ami-40d28157"
instance_type = "t2.micro"
tags = {
Name = "terraform-example"
}
}
When I put literal values in place of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY everything works OK, but with the code above I see the following error:
on main.tf line 4, in provider "aws":
4: secret_key = AWS_SECRET_ACCESS_KEY
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Any ideas on how to solve this problem?
You don't have to do anything else. As explained in the Terraform AWS provider authentication documentation, Terraform will automatically look for credentials in this order:
Static credentials
Environment variables
Shared credentials/configuration file
CodeBuild, ECS, and EKS Roles
EC2 Instance Metadata Service (IMDS and IMDSv2)
So once you export your keys (make sure to export them correctly):
export AWS_ACCESS_KEY_ID="something"
export AWS_SECRET_ACCESS_KEY="something"
in your config file you would just use (exemplified in the docs):
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-40d28157"
instance_type = "t2.micro"
tags = {
Name = "terraform-example"
}
}
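Alternatively, if you prefer the shared credentials file (the third option above), a minimal sketch (the profile name is an assumption):

# ~/.aws/credentials
[myprofile]
aws_access_key_id     = something
aws_secret_access_key = something

provider "aws" {
  region  = "us-east-1"
  profile = "myprofile"
}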

Terraform - Multiple accounts with multiple environments (regions)

I am developing the infrastructure (IaC) I want to have in AWS with Terraform. To test, I am using an EC2 instance.
This code has to be able to be deployed across multiple accounts and multiple regions (environments) per developer. This is an example:
account-999:
  developer1: us-east-2
  developer2: us-west-1
  developerN: us-east-1
account-666:
  Staging:    us-east-1
  Production: eu-west-2
I've created two .tfvars files, account-999.env.tfvars and account-666.env.tfvars, with the following content:
profile="account-999" and profile="account-666", respectively.
This is my main.tf which contains the aws provider with the EC2 instance:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
profile = var.profile
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
And the variable.tf file:
variable "profile" {
type=string
}
variable "region" {
description = "Region by developer"
type = map
default = {
developer1 = "us-west-2"
developer2 = "us-east-2"
developerN = "ap-southeast-1"
}
}
But I'm not sure if I'm managing it well. For example, the region variable only contains the values of the account-999 account. How can I solve that?
On the other hand, with this structure, would it be possible to implement modules?
You could use a provider alias to accomplish this. More info about provider aliases can be found in the Terraform documentation.
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
resource "aws_instance" "foo" {
provider = aws.west
# ...
}
Another way to look at it is by using Terraform workspaces. Here is an example:
terraform workspace new account-999
terraform workspace new account-666
Then this is an example of your aws credentials file:
[account-999]
aws_access_key_id=xxx
aws_secret_access_key=xxx
[account-666]
aws_access_key_id=xxx
aws_secret_access_key=xxx
A reference to that account can be used within the provider block:
provider "aws" {
region = "us-east-1"
profile = "${terraform.workspace}"
}
You could even combine both methods!
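For instance, a minimal sketch of combining them (the map keys and regions are assumptions; each workspace name doubles as the AWS profile and selects its own region):

variable "region" {
  type = map(string)
  default = {
    "account-999" = "us-east-1"
    "account-666" = "eu-west-2"
  }
}

provider "aws" {
  profile = terraform.workspace
  region  = var.region[terraform.workspace]
}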

How to pass the EC2 instance ID created by an aws_instance resource into a file and place it inside an EC2 instance using Terraform?

I want to pass the EC2 instance id created by terraform to a file sagemaker.config which I want to place inside the EC2 instance.
ec2_files/sagemaker.config
I want the instance ID inside the config file in the format below:
email:abc#xyc.com
instanceid:i-0a4ca8714103432dxxx
ec2.tf
resource "aws_instance" "sagemaker_automation" {
instance_type = var.instance_type
ami = var.image_id
iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
tags = {
Name = "Sagemaker Automation"
}
}
After doing some research, I found a way to use provisioner "file" and provisioner "local-exec" to pass an EC2 instance ID to a file and place it inside of an EC2 instance.
resource "tls_private_key" "example" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = "cloudtls"
public_key = tls_private_key.example.public_key_openssh
}
resource "aws_instance" "automation" {
instance_type = var.instance_type
ami = var.image_id
iam_instance_profile = aws_iam_instance_profile.ec2_profile.name
key_name = aws_key_pair.generated_key.key_name
vpc_security_group_ids = var.security_group_ids
subnet_id = var.subnet_id
tags = {
Name = "Automation"
}
provisioner "local-exec" {
# the below command replaces the existing instance id in the file, if any
# and replaces it with the new instance id
command = "sed -i '/instanceid/d' ec2_files/sagemaker.config;echo 'instanceid:${aws_instance.automation.id}' >> ec2_files/sagemaker.config"
}
# this copies the files in the ec2_files/ directory to /home/ec2-user on the instance
provisioner "file" {
source = "ec2_files/"
destination = "/home/ec2-user"
}
# this is required to establish a connection and to copy files to the EC2 instance id from local disk
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.example.private_key_pem
host = aws_instance.automation.private_ip
}
provisioner "remote-exec" {
inline = [
"ls -lrt",
"(crontab -l 2>/dev/null; echo '#reboot sleep 30 && /home/ec2-user/runpython.sh >> sagemakerautomation.log') | crontab -",
"chmod +x runpython.sh",
"cat sagemaker.config"
]
}
}

Terraform - Get a value from parameter store and pass to resource

We store our latest approved AMIs in AWS Parameter Store. When creating new instances with Terraform, I would like to programmatically get this AMI ID. I have a command to pull the AMI ID, but I'm not sure how to use it with Terraform.
Here is the command I use to pull the AMI ID:
$(aws ssm get-parameter --name /path/to/ami --query 'Parameter.Value' --output text)
And here is my Terraform script:
resource "aws_instance" "nginx" {
ami = "ami-c58c1dd3" # pull value from parameter store
instance_type = "t2.micro"
#key_name = "${var.key_name}"
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
How can I use the command to pull the AMI ID in the Terraform script?
You can use the aws_ssm_parameter data source to fetch the value of a parameter at runtime:
data "aws_ssm_parameter" "ami" {
name = "/path/to/ami"
}
resource "aws_instance" "nginx" {
ami = data.aws_ssm_parameter.ami.value # pull value from parameter store
instance_type = "t2.micro"
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
However, a better approach might be to use the aws_ami data source to filter for the AMI you want more directly instead of pushing the AMI ID to SSM parameter store and then looking it up later. You can filter on a number of criteria including name, account owner and tags. Here's the example from the aws_instance resource documentation that is looking for the latest Ubuntu 20.04 AMI:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
I recommend you use this approach since you already have the AMI ID stored in AWS SSM:
resource "aws_instance" "nginx" {
ami = data.aws_ssm_parameter.ami.value
instance_type = "t2.micro"
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
data "aws_ssm_parameter" "ami" {
name = "/production/ami"
}