I'm trying to pass user data via a template file so the code looks less clumsy, but I'm having trouble. I've tried all the different combinations, but nothing is working.
I went through the Terraform documentation; it doesn't give any special instructions for the path value.
Folder structure:
project1/env/dr/compute/main.tf
module "share_server" {
count = 2
source = "../../../../terraform_modules/modules/compute/"
ami = data.aws_ami.amazonlinux2.id
instance_type = "t3.micro"
availability_zone = data.aws_availability_zones.az.names[count.index]
subnet_id = data.aws_subnets.app_subnet.ids[count.index]
associate_public_ip_address = "false"
key_name = "app7"
vpc_security_group_ids = ["sg-08d38198dc153c410"]
instance_root_device_size = 20
kms_key_id = "ea88e727-e506-4530-b92f-2827d8f9c94e"
volume_type = "gp3"
platform = "linux"
backup = true
Environment = "dr"
server_role = "Application_Server"
server_component = "share_servers"
hostname = "app-dr-test-10"
tags = {
Name = "${local.instance_name}-share-${count.index}"
}
}
My EC2 module resides in the folder structure below:
project1/modules/compute/ec2.tf
project1/modules/compute/userdata/share_userdata.tpl
The ec2.tf code is below; I have removed the bottom half of the code so the post won't be too long to read.
resource "aws_instance" "ec2" {
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = var.subnet_id
associate_public_ip_address = var.associate_public_ip_address
user_data = templatefile("userdata/share_userdata.tpl",
{ hostname = var.hostname }
)
Error:
PS B:\PubOps\app7_terraform\environments\dr\compute> terraform apply
╷
│ Error: Invalid function argument
│
│   on ..\..\..\..\terraform_modules\modules\compute\main.tf line 10, in resource "aws_instance" "ec2":
│   10:   user_data = templatefile("userdata/share_userdata.tpl",
│   11:     {
│   12:       hostname = var.hostname
│   13:     })
│
│ Invalid value for "path" parameter: no file exists at userdata/share_userdata.tpl; this function works only with
│ files that are distributed as part of the configuration source code, so if this file will be created by a resource
│ in this configuration you must instead obtain this result from an attribute of that resource.
╵
User data (share_userdata.tpl):
#!/bin/bash
yum update -y

### hostname
sudo hostnamectl set-hostname ${hostname}
echo "127.0.0.1 ${hostname}
${hostname} localhost4 localhost4.localdomain4" > /etc/hosts
echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg

# EFS utility and mounting
yum install -y amazon-efs-utils
References:
The same code is shown on GitHub; maybe it works for the author, but not for me: https://github.com/kunduso/ec2-userdata-terraform/blob/add-userdata/ec2.tf
My goal is to set up user data and pass variables via AWS Parameter Store as shown in the URL below, but I couldn't even get the basic setup working.
https://skundunotes.com/2021/11/17/manage-sensitive-variables-in-aws-ec2-user-data-with-terraform/
I tried pointing to the file like this: ./share_userdata.tpl.
I tried an absolute path: b/project1/dr/compute/share_userdata.tpl.
I also tried $module.path/share_userdata.tpl.
None of them worked.
The error is rather clear:
no file exists at userdata/share_userdata.tpl
templatefile resolves a relative path against the directory you run Terraform from. So you must ensure that the folder in which you execute terraform apply contains a subfolder called userdata, and that folder contains the file share_userdata.tpl.
You need to pass a path that resolves correctly at apply time. A relative path such as ./userdata/... is resolved against your current working directory, not against the module containing the call, so for a template that ships inside the module, build the path from path.module:
resource "aws_instance" "ec2" {
  ami                         = var.ami
  instance_type               = var.instance_type
  availability_zone           = var.availability_zone
  subnet_id                   = var.subnet_id
  associate_public_ip_address = var.associate_public_ip_address
  user_data = templatefile("${path.module}/userdata/share_userdata.tpl",
    { hostname = var.hostname }
  )
}
Here path.module expands to the directory containing the module's .tf files, so the lookup no longer depends on your working directory.
This assumes ec2.tf and the userdata folder live in the same directory, for example project1/modules/compute/.
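For reference, a sketch of the layout the fixed call expects, based on the module paths given in the question:
project1/modules/compute/
├── ec2.tf
└── userdata/
    └── share_userdata.tpl
With this layout, the template is found regardless of which environment folder (such as project1/env/dr/compute) you run terraform apply from.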
Related
I'm trying to provision GCP resources through Terraform, but it's timing out and also throwing errors saying that resources already exist (I've looked in GCP, both in the console and through the CLI, and the resources do not exist).
╷
│ Error: Error waiting to create Image: Error waiting for Creating Image: timeout while waiting for state to become 'DONE' (last state: 'RUNNING', timeout: 15m0s)
│
│   with google_compute_image.student-image,
│   on main.tf line 29, in resource "google_compute_image" "student-image":
│   29: resource "google_compute_image" "student-image" {
│
╵
╷
│ Error: Error creating Firewall: googleapi: Error 409: The resource 'projects/**-****-**********-******/global/firewalls/*****-*********-*******-*****-firewall' already exists, alreadyExists
│
│   with google_compute_firewall.default,
│   on main.tf line 46, in resource "google_compute_firewall" "default":
│   46: resource "google_compute_firewall" "default" {
╵
Some (perhaps salient) details:
I have previously provisioned these resources successfully using this same approach.
My billing account has since changed.
At another point, it was saying that the machine image existed (which, if it does, I can't see either in the console or the CLI).
I welcome any insights/suggestions.
EDIT
Including HCL; variables are defined in variables.tf and terraform.tfvars
provider "google" {
  region = var.region
}

resource "google_compute_image" "student-image" {
  name    = var.google_compute_image_name
  project = var.project

  raw_disk {
    source = var.google_compute_image_source
  }

  timeouts {
    create = "15m"
    update = "15m"
    delete = "15m"
  }
}

resource "google_compute_firewall" "default" {
  name    = "cloud-computing-project-image-firewall"
  network = "default"
  project = var.project

  allow {
    protocol = "tcp"
    # 22: SSH
    # 80: HTTP
    ports = [
      "22",
      "80",
    ]
  }

  source_ranges = ["0.0.0.0/0"]
}

module "vm" {
  source       = "./vm"
  name         = "workspace-vm"
  project      = var.project
  image        = google_compute_image.student-image.self_link
  machine_type = "n1-standard-1"
}
There is a vm subdirectory with main.tf:
resource "google_compute_instance" "student_instance" {
name = var.name
machine_type = var.machine_type
zone = var.zone
project = var.project
boot_disk {
initialize_params {
image = var.image
size = var.disk_size
}
}
network_interface {
network = "default"
access_config {
}
}
labels = {
project = "machine-learning-on-the-cloud"
}
}
...and variables.tf:
variable "name" {}
variable "project" {}
variable "zone" {
  default = "us-east1-b"
}
variable "image" {}
variable "machine_type" {}
variable "disk_size" {
  default = 20
}
It sounds like the resources were provisioned with Terraform, but perhaps someone deleted them manually, so your state file no longer matches what actually exists. terraform refresh might solve your problem.
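If instead the firewall really does exist in GCP but is missing from your state (which would explain the 409 alreadyExists), terraform import can bring it back under management. A hedged sketch, where <your-project-id> is a placeholder for your project and the firewall name comes from the config above:
terraform import google_compute_firewall.default projects/<your-project-id>/global/firewalls/cloud-computing-project-image-firewall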
I used this module to create a security group inside a VPC. One of the outputs is the security_group_id, but I'm getting this error:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = [module.app_security_group.security_group_id]
│ ├────────────────
│ │ module.app_security_group is a object, known only after apply
│
│ This object does not have an attribute named "security_group_id".
I need the security group for an ECS service:
resource "aws_ecs_service" "hello_world" {
name = "hello-world-service"
cluster = aws_ecs_cluster.container_service_cluster.id
task_definition = aws_ecs_task_definition.hello_world.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.app_security_group.security_group_id]
subnets = module.vpc.private_subnets
}
load_balancer {
target_group_arn = aws_lb_target_group.loadbalancer_target_group.id
container_name = "hello-world-app"
container_port = 3000
}
depends_on = [aws_lb_listener.loadbalancer_listener, module.app_security_group]
}
I understand that the security group ID can only be known after the group is created. That's why I added the depends_on clause to the ECS stanza, but it kept returning the same error.
Update
I specified count = 1 on the app_security_group module, and this is the error I'm getting now.
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = module.app_security_group.security_group_id
│ ├────────────────
│ │ module.app_security_group is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Update II
This is the module declaration:
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "${var.project}-web-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
# ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
ingress_cidr_blocks = ["0.0.0.0/0"]
}
I took a look at that module. The problem is that version 3.17.0 simply does not have a security_group_id output; in the 3.x releases the outputs were prefixed with this_ (this_security_group_id). You are using a really old version.
The latest version on the registry is 4.7.0, and you would want to upgrade to that one. In fact, any version from 4.0.0 up has the security_group_id output, so you need at least 4.0.0.
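A sketch of the upgraded declaration, with everything except version left unchanged from your module block above:
module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "4.7.0"

  name        = "${var.project}-web-sg"
  description = "Security group for web-servers with HTTP ports open within VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}
After bumping the version, run terraform init -upgrade so Terraform downloads the newer module version.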
Since you are using count, the module is now a list of objects, so you also have to index it before accessing the output:
network_configuration {
  security_groups = [module.app_security_group[0].security_group_id]
  subnets         = module.vpc.private_subnets
}
I am trying to spin up an AWS bastion host on EC2, using the Terraform module provided by Guimove, and I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of the key pair gets created during the module run, so the key isn't there when it tries to launch the instance, and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.
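If you're unsure which AMI the launch template points at, you can look it up first and then subscribe to that product. A hedged sketch using the launch template ID from the error message above (substitute your own ID):
aws ec2 describe-launch-template-versions --launch-template-id lt-004b0af2895c684b3 --versions '$Latest' --query 'LaunchTemplateVersions[0].LaunchTemplateData.ImageId'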
I have a Terraform file main.tf that I use to create AWS resources:
provider "aws" {
region = "us-east-2"
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
vpc_security_group_ids = [
aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
tags = {
Name = "terraform-example"
}
}
resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0"]
}
}
resource "aws_security_group" "elb" {
name = "terraform-example-elb"
# Allow all outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
"0.0.0.0/0"]
}
# Inbound HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0"]
}
}
variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}
variable "elb_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 80
}
resource "aws_launch_configuration" "example" {
image_id = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
security_groups = [
aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
lifecycle {
create_before_destroy = true
}
}
resource "aws_elb" "example" {
name = "terraform-asg-example"
security_groups = [
aws_security_group.elb.id]
availability_zones = data.aws_availability_zones.all.names
health_check {
target = "HTTP:${var.server_port}/"
interval = 30
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
# This adds a listener for incoming HTTP requests.
listener {
lb_port = var.elb_port
lb_protocol = "http"
instance_port = var.server_port
instance_protocol = "http"
}
}
resource "aws_autoscaling_group" "example" {
launch_configuration = aws_launch_configuration.example.id
availability_zones = data.aws_availability_zones.all.names
min_size = 2
max_size = 10
load_balancers = [
aws_elb.example.name]
health_check_type = "ELB"
tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}
data "aws_availability_zones" "all" {}
output "public_ip" {
value = aws_instance.example.public_ip
description = "The public IP of the web server"
}
I successfully created the resources and then destroyed them afterward. Now I would like to create an AWS S3 remote backend for the project, so I appended the extra resources to the same file:
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-up-and-running-state12345"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_locks.name
description = "The name of the DynamoDB table"
}
Then, I created a new file named backend.tf and added the code there:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "terraform-up-and-running-state12345"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}
When I run terraform init, I get the error below:
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
╷
│ Error: Error loading state:
│ BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
│ status code: 301, request id: , host id:
│
│ Terraform failed to load the default state from the "s3" backend.
│ State migration cannot occur unless the state can be loaded. Backend
│ modification and state migration has been aborted. The state in both the
│ source and the destination remain unmodified. Please resolve the
│ above error and try again.
I created the S3 bucket from the terminal:
$ aws s3api create-bucket --bucket terraform-up-and-running-state12345 --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
Then, I tried again and received the same error. However, the bucket is already there.
I also can't run the destroy command:
$ terraform destroy
Acquiring state lock. This may take a few moments...
╷
│ Error: Error acquiring the state lock
│
│ Error message: 2 errors occurred:
│ * ResourceNotFoundException: Requested resource not found
│ * ResourceNotFoundException: Requested resource not found
│
│
│
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.
Can someone explain to me why that is and how to solve it?
Remove the .terraform folder and try terraform init again.
OR
The error is because there's no S3 bucket created to sync with yet. Remove the s3 JSON object in .terraform/terraform.tfstate (the object describing the remote backend), then run terraform init again.
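In command form, a minimal sketch of the first option (the ResourceNotFoundException on the state lock also suggests the DynamoDB table doesn't exist yet, so make sure the bucket and table have actually been created before re-initializing):
rm -rf .terraform
terraform init -reconfigure
The -reconfigure flag tells terraform init to disregard any previously saved backend configuration instead of trying to migrate state from it.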
Can you please tell me a way to pass a key pair in Terraform for an EC2 spin-up?
variable "public_path" {
default = "D:\"
}
resource "aws_key_pair" "app_keypair" {
public_key = file(var.public_path)
key_name = "my_key"
}
resource "aws_instance" "web" {
ami = "ami-12345678"
instance_type = "t1.micro"
key_name = aws_key_pair.app_keypair
security_groups = [ "${aws_security_group.test_sg.id}" ]
}
Error: Invalid value for "path" parameter: failed to read D:".
Bash: tree
.
├── data
│ └── key
└── main.tf
1 directory, 2 files
Above is what my file system looks like; I'm not on Windows. You were passing the directory and thinking that key_name meant it would find the name of your key in that directory. But the function file() has no idea what key_name is; that is a value local to the aws_key_pair resource. So make sure you give the file() function the full path to the file.
Look below for my code. You also passed aws_key_pair.app_keypair to your aws_instance resource, but that's an object containing several properties. You need to specify which property you want to pass, in this case aws_key_pair.app_keypair.key_name. This will cause AWS to stand up an EC2 instance, look for a key pair with the name in your code, and associate the two.
provider "aws" {
  profile = "myprofile"
  region  = "us-west-2"
}

variable "public_path" {
  default = "./data/key"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t1.micro"
  key_name      = aws_key_pair.app_keypair.key_name
}
Here is my plan output. You can see the key is getting injected correctly. This is the same key as in the Terraform docs, so it's safe to post here.
Terraform will perform the following actions:

  # aws_instance.web will be created
  + resource "aws_instance" "web" {
      <...omitted for Stack Overflow brevity...>
      + key_name = "somekeyname"
      <...omitted for Stack Overflow brevity...>
    }

  # aws_key_pair.app_keypair will be created
  + resource "aws_key_pair" "app_keypair" {
      + arn         = (known after apply)
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + key_name    = "somekeyname"
      + key_pair_id = (known after apply)
      + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
    }

Plan: 2 to add, 0 to change, 0 to destroy.