Using Terraform to create an AWS EC2 bastion - amazon-web-services

I am trying to spin up an AWS bastion host on EC2 using the Terraform module provided by Guimove, and I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 launch template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of the key pair is created by the module itself, so the key isn't there when it tries to launch the instance and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}

I had the same issue. The fix was to go into the AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.
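On the chicken-and-egg part of the question: at plan time the module only needs the key pair name, and (as I understand the module) the public keys in S3 are synced onto the bastion at runtime, so the key object can be written into the bucket the module itself creates instead of a separately managed aws_s3_bucket.bucket. A minimal sketch, assuming your module version exposes the bucket name as an output called bucket_name (check the module's outputs before relying on this; kms_key_id omitted for brevity):
resource "aws_s3_bucket_object" "bucket_public_key" {
  # Write into the bucket created by the bastion module itself.
  bucket  = module.bastion.bucket_name   # assumed output name
  key     = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
  content = tls_private_key.bastion_host.public_key_openssh
}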

Related

How can I add key_name to the resource of an EC2 instance?

In terraform, how can I specify EC2 private/public key pair, when launching a new EC2 instance?
I am following https://stackoverflow.com/a/73351869 and https://stackoverflow.com/a/64287520 to add key_name to a resource in the following main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

variable "public_path" {
  default = "/path/to/MyKeyPair.pem"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "app_server" {
  ami                    = "ami-052efd3df9dad4825"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["sg-xxxxx"]
  key_name               = aws_key_pair.app_keypair.key_name
  tags = {
    Name = "ExampleAppServerInstance"
  }
}
but
$ terraform apply
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_key_pair.app_keypair: Creating...
╷
│ Error: error importing EC2 Key Pair (somekeyname): InvalidParameterValue: Value for parameter PublicKeyMaterial is invalid. Length exceeds maximum of 2048.
│ status code: 400, request id: 72425610-202e-42ce-98b4-a8dce5fef694
│
│ with aws_key_pair.app_keypair,
│ on main.tf line 21, in resource "aws_key_pair" "app_keypair":
│ 21: resource "aws_key_pair" "app_keypair" {
│
Why is that, and what can I do about it? Thanks.
See the "NOTE:" at the bottom of the aws_key_pair documentation.
The AWS API does not include the public key in the response, so terraform apply will attempt to replace the key pair. There is currently no supported workaround for this limitation.
And I can tell by your comment that you know the create-key-pair output key material is the private key, which is correct and can be verified in the Create key pairs documentation.
Use the create-key-pair command as follows to generate the key pair and to save the private key to a .pem file.
So if you want to create a key locally and use this terrform resource, you'll need to use a method that leaves you with public key material, like openssh. I'd recommend following the Create a key pair using a third-party tool and import the public key to Amazon EC2 section, if you intend to do so.
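For example, here is a minimal sketch that keeps everything in Terraform by generating the key with the tls provider instead of reusing the downloaded .pem (the tls resource name is just a placeholder):
resource "tls_private_key" "app" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "app_keypair" {
  key_name = "somekeyname"
  # public_key_openssh is genuine public key material, unlike the
  # private-key .pem that create-key-pair saves locally.
  public_key = tls_private_key.app.public_key_openssh
}

output "app_private_key_pem" {
  value     = tls_private_key.app.private_key_pem
  sensitive = true
}
Alternatively, run ssh-keygen locally and point var.public_path at the generated .pub file rather than at the .pem private key.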

How to fix 403 error when applying Terraform?

I created a new EC2 instance on AWS.
I am trying to run a Terraform configuration against AWS and I'm getting an error.
I don't have a previously created AMI, so I'm not sure if that's the issue.
I checked my keypair and ensured it is correct.
I also checked the API details and they are correct too. I'm using a college AWS App account where the API details are the same for all users. Not sure if that would be an issue.
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"
  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:
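For what it's worth, InvalidClientTokenId from sts:GetCallerIdentity generally means the access key and secret the provider picked up are not valid for the account (expired, deactivated, or coming from a different profile or environment variable than you expect). One way to isolate whether the keys themselves are the problem is to pass them to the provider directly for a one-off test; the values below are placeholders and real keys should never be committed:
provider "aws" {
  region     = "eu-west-1"
  access_key = "AKIA................"   # placeholder: the college account's access key ID
  secret_key = "...."                   # placeholder: the matching secret access key
}
Note that the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN environment variables take precedence over the shared credentials file, so a stale exported session token can produce this same error even when ~/.aws/credentials is correct.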

Terraform/GCP Timeout (and Resources Already Exist)

I'm trying to provision GCP resources through Terraform, but it's timing out while also throwing errors saying that resources already exist (I've looked in GCP and through the CLI, and the resources do not exist).
Error: Error waiting to create Image: Error waiting for Creating Image: timeout while waiting for state to become 'DONE' (last state: 'RUNNING', timeout: 15m0s)
│
│ with google_compute_image.student-image,
│ on main.tf line 29, in resource "google_compute_image" "student-image":
│ 29: resource "google_compute_image" "student-image" {
│
╵
╷
│ Error: Error creating Firewall: googleapi: Error 409: The resource 'projects/**-****-**********-******/global/firewalls/*****-*********-*******-*****-firewall' already exists, alreadyExists
│
│ with google_compute_firewall.default,
│ on main.tf line 46, in resource "google_compute_firewall" "default":
│ 46: resource "google_compute_firewall" "default" {
Some (perhaps salient) details:
I have previously provisioned these resources successfully using this same approach.
My billing account has since changed.
At another point, it was saying that the machine image existed (which, if it does, I can't see either in the console or the CLI).
I welcome any insights/suggestions.
EDIT
Including HCL; variables are defined in variables.tf and terraform.tfvars
provider "google" {
  region = var.region
}

resource "google_compute_image" "student-image" {
  name    = var.google_compute_image_name
  project = var.project
  raw_disk {
    source = var.google_compute_image_source
  }
  timeouts {
    create = "15m"
    update = "15m"
    delete = "15m"
  }
}

resource "google_compute_firewall" "default" {
  name    = "cloud-computing-project-image-firewall"
  network = "default"
  project = var.project
  allow {
    protocol = "tcp"
    # 22: SSH
    # 80: HTTP
    ports = [
      "22",
      "80",
    ]
  }
  source_ranges = ["0.0.0.0/0"]
}
source = "./vm"
name = "workspace-vm"
project = var.project
image = google_compute_image.student-image.self_link
machine_type = "n1-standard-1"
}
There is a vm subdirectory with main.tf:
resource "google_compute_instance" "student_instance" {
name = var.name
machine_type = var.machine_type
zone = var.zone
project = var.project
boot_disk {
initialize_params {
image = var.image
size = var.disk_size
}
}
network_interface {
network = "default"
access_config {
}
}
labels = {
project = "machine-learning-on-the-cloud"
}
}
...and variables.tf:
variable "name" {}
variable "project" {}
variable "zone" {
  default = "us-east1-b"
}
variable "image" {}
variable "machine_type" {}
variable "disk_size" {
  default = 20
}
It sounds like the resources were provisioned with Terraform but perhaps someone deleted them manually, so now your state file and what actually exists don't match. terraform refresh might solve your problem.
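If refresh doesn't help (it only reconciles resources that are already tracked in state), another option on Terraform 1.5+ is to adopt the existing firewall with an import block; a sketch, with the project ID left as a placeholder:
import {
  to = google_compute_firewall.default
  id = "projects/YOUR_PROJECT_ID/global/firewalls/cloud-computing-project-image-firewall"
}
Running terraform plan will then show the pending import and terraform apply performs it (older Terraform versions can use the terraform import CLI command instead).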

Terraform Error refreshing state: BucketRegionError: incorrect region

I have the Terraform file main.tf that I used to create AWS resources:
provider "aws" {
region = "us-east-2"
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
vpc_security_group_ids = [
aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
tags = {
Name = "terraform-example"
}
}
resource "aws_security_group" "instance" {
name = "terraform-example-instance"
ingress {
from_port = var.server_port
to_port = var.server_port
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0"]
}
}
resource "aws_security_group" "elb" {
name = "terraform-example-elb"
# Allow all outbound
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [
"0.0.0.0/0"]
}
# Inbound HTTP from anywhere
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [
"0.0.0.0/0"]
}
}
variable "server_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 8080
}
variable "elb_port" {
description = "The port the server will use for HTTP requests"
type = number
default = 80
}
resource "aws_launch_configuration" "example" {
image_id = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
security_groups = [
aws_security_group.instance.id]
user_data = <<-EOF
#!/bin/bash
echo "Hello, World" > index.html
nohup busybox httpd -f -p "${var.server_port}" &
EOF
lifecycle {
create_before_destroy = true
}
}
resource "aws_elb" "example" {
name = "terraform-asg-example"
security_groups = [
aws_security_group.elb.id]
availability_zones = data.aws_availability_zones.all.names
health_check {
target = "HTTP:${var.server_port}/"
interval = 30
timeout = 3
healthy_threshold = 2
unhealthy_threshold = 2
}
# This adds a listener for incoming HTTP requests.
listener {
lb_port = var.elb_port
lb_protocol = "http"
instance_port = var.server_port
instance_protocol = "http"
}
}
resource "aws_autoscaling_group" "example" {
launch_configuration = aws_launch_configuration.example.id
availability_zones = data.aws_availability_zones.all.names
min_size = 2
max_size = 10
load_balancers = [
aws_elb.example.name]
health_check_type = "ELB"
tag {
key = "Name"
value = "terraform-asg-example"
propagate_at_launch = true
}
}
data "aws_availability_zones" "all" {}
output "public_ip" {
value = aws_instance.example.public_ip
description = "The public IP of the web server"
}
I successfully created the resources and destroyed them afterward. Now I would like to add an AWS S3 remote backend for the project, so I appended the extra resources to the same file:
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-up-and-running-state12345"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-up-and-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "The ARN of the S3 bucket"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_locks.name
description = "The name of the DynamoDB table"
}
Then I created a new file named backend.tf and added the backend configuration there:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "terraform-up-and-running-state12345"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-2"

    # Replace this with your DynamoDB table name!
    dynamodb_table = "terraform-up-and-running-locks"
    encrypt        = true
  }
}
When I run terraform init, I get the error below:
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
╷
│ Error: Error loading state:
│ BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
│ status code: 301, request id: , host id:
│
│ Terraform failed to load the default state from the "s3" backend.
│ State migration cannot occur unless the state can be loaded. Backend
│ modification and state migration has been aborted. The state in both the
│ source and the destination remain unmodified. Please resolve the
│ above error and try again.
I created the S3 bucket from the terminal:
$ aws s3api create-bucket --bucket terraform-up-and-running-state12345 --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
Then I tried again and received the same error. However, the bucket is already there:
I also can't run the destroy command:
$ terraform destroy
Acquiring state lock. This may take a few moments...
╷
│ Error: Error acquiring the state lock
│
│ Error message: 2 errors occurred:
│ * ResourceNotFoundException: Requested resource not found
│ * ResourceNotFoundException: Requested resource not found
│
│
│
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.
Can someone explain why that happens and how to solve it?
Remove the .terraform folder and try terraform init again.
OR
The error is because there is no S3 bucket created to sync with yet:
remove the s3 JSON object in .terraform/terraform.tfstate,
remove the object generating the remote backend, and
run terraform init again.
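More generally, the backend bucket and the DynamoDB lock table must already exist, in the region the backend block names, before terraform init can use them; Terraform cannot create them through the backend itself. A common bootstrap, sketched here with the names from the question:
# Step 1: with the backend "s3" block commented out, run terraform init and
#         terraform apply so aws_s3_bucket.terraform_state and
#         aws_dynamodb_table.terraform_locks are created using local state.
# Step 2: re-enable the block below and run terraform init again, letting it
#         copy the local state to the new backend (newer releases accept
#         -migrate-state to make this explicit).
terraform {
  backend "s3" {
    bucket         = "terraform-up-and-running-state12345"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-2"                        # must match the region the bucket actually lives in
    dynamodb_table = "terraform-up-and-running-locks"   # must exist, or locking fails with ResourceNotFoundException
    encrypt        = true
  }
}
The BucketRegionError (HTTP 301) specifically means a bucket with that globally unique name exists in a different region than the one named in the backend block, for example because it was created earlier in another region or account.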

UnauthorizedOperation on terraform apply. How to run the following AWS config?

I want to deploy an infrastructure on AWS using terraform. This is the main.tf config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"
  tags = {
    Name = "ExampleAppServerInstance"
  }
}
AWS config file ~/.aws/config:
[default]
region = us-east-1
[humboi]
region = us-east-1
Running terraform apply and entering "yes" gives:
aws_instance.app_server: Creating...
╷
│ Error: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: r8hvTFNQzGA7k309BxQ9OYRxCaCH-0wwYvhAzbjEt77PsyOYyWItWNrOPUW4Z1CIzm8A6x6euBuSZsE8uSfb3YdPuzLXttHT3DS9IJsDs0ilX0Vxtu1OZ3nSCBowuylMuLEXY8VdaA35Hb7CaLb-ktQwb_ke0Pku-Uh2Vi_cwsYwAdXdGVeTETkiuErZ3tAU37f5DyZkaL4dVgPMynjRI3-GW0P63WJxcZVTkfNcNzuTx6PQfdv-YydIdUOSAS-RUVqK6ewiX-Mz4S0GwAaIFeJ_4SoIQVjogbzYYBC0bI4-sBSyVmySGuxNF6x-BOU0Zt2-po1mwEiPaDBVL9aOt6k_eZKMbYM9Ef8qQRcxnSLWOCiHuw6LVbmPJzaDQRFNZ2eO11Fa2oOcu8JMEOQjOtPkibQNAdO_5LZWAnc6Ye2-Ukt2_folTKN6TH6v1hmwsLAO7uGL60gQ-n9iBfCIqEE_6gfImsdbOptgz-IRtTrz5a8bfLOBVfd9oNjKGXQoA2ZKhM35m1ML1DQKY8LcDv0aULkGzoM6bRYoq1UkJBYuF-ShamtSpSlzpd4KDXztpxUdb496FR4MdOoHgS04W_3WXoN-hb_lG-Wgbkv7CEWMv2pNhBCRipBgUUw3QK-NApkeTxxJXy9vFQ4fTZQanEIQa_Bxxg
│ status code: 403, request id: 0c1f14ec-b5f4-4a3f-bf1f-40be4cf370fc
│
│ with aws_instance.app_server,
│ on main.tf line 17, in resource "aws_instance" "app_server":
│ 17: resource "aws_instance" "app_server" {
│
╵
The error says the operation was unauthorized. What could cause the unauthorized operation if I have both ~/.aws/config and ~/.aws/credentials?
I've had this happen when I changed my backend configuration without deleting .terraform. I believe Terraform caches credentials in .terraform; if you delete that directory, it will be regenerated and it might work for you.
Also, make sure you restart your machine after setting the AWS environment variables.
The IAM user you created doesn't have admin access or EC2 full access; grant it and try again.
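If you manage IAM with Terraform (using credentials that do have administrative rights), granting EC2 access can itself be expressed in HCL; a minimal sketch, where the user name is a placeholder:
resource "aws_iam_user_policy_attachment" "ec2_full_access" {
  user       = "terraform-user"                               # placeholder: the IAM user whose keys Terraform uses
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"  # AWS-managed policy
}
In practice you would scope this down to just the ec2:RunInstances-related actions the deployment needs; the encoded authorization failure message in the error can also be decoded by an authorized principal with the STS DecodeAuthorizationMessage API to see exactly which action was denied.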