vpc_id argument is not expected here - amazon-web-services

I’m new to using modules in Terraform and I’m getting an error from the main.tf in my root module, saying “an argument named vpc_id is not expected here”. The error occurs on the “sg” module block at the bottom.
Here is my main.tf in my root module:
provider "aws" {
  access_key = var.my_access_key
  secret_key = var.my_secret_key
  region     = var.region
}
provider "random" {
}

resource "random_id" "prefix" {
  byte_length = 8
}

module "ec2" {
  source         = "./modules/ec2"
  infra_env      = var.infra_env
  public_ssh_key = var.public_ssh_key
  allow_rdp      = module.sg.allow_rdp.id
  allow_winrm    = module.sg.allow_winrm.id
}

module "iam" {
  source    = "./modules/iam"
  infra_env = var.infra_env
}

module "s3" {
  source    = "./modules/s3"
  infra_env = var.infra_env
}

module "sg" {
  source    = "./modules/sg"
  infra_env = var.infra_env
  vpc_id    = module.vpc.vpc_1.id
}

module "vpc" {
  source    = "./modules/vpc"
  infra_env = var.infra_env
}
Here is the main.tf of my “SG” module. I thought I only had to put module.vpc.vpc_1.id to get the input from that module:
terraform {
  required_version = ">= 1.1.5"
}

module "vpc" {
  source    = "../vpc"
  infra_env = var.infra_env
}
# Allow WinRM to set the administrator password
resource "aws_security_group" "allow_winrm" {
  name        = "allow_winrm"
  description = "Allow access to the instances via WinRM over HTTP and HTTPS"
  vpc_id      = module.vpc.vpc_1.id

  ingress {
    description = "Access the instances via WinRM over HTTP and HTTPS"
    from_port   = 5985
    to_port     = 5986
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.infra_env}-allow-winrm"
  }
}
# Allow RDP connectivity to EC2 instances
resource "aws_security_group" "allow_rdp" {
  name        = "allow_rdp"
  description = "Allow access to the instances via RDP"
  vpc_id      = module.vpc.vpc_1.id

  ingress {
    description = "Allow access to the instances via RDP"
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.infra_env}-allow-rdp"
  }
}
Here are the outputs of my VPC module:
output "subnet_1" {
  value = aws_subnet.subnet_1
}

output "vpc_1" {
  value = aws_vpc.vpc_1.id
}

output "gw_1" {
  value = aws_internet_gateway.gw_1
}

There are a couple of problems with your code. In the outputs, you have written this:
output "vpc_1" {
  value = aws_vpc.vpc_1.id
}
This output already provides the VPC ID you want, so it is enough to reference it by name, without a trailing .id: module.vpc.vpc_1. More information about referencing module outputs is available in [1].
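To make the two options concrete, here is a sketch of the equivalent ways to expose the VPC ID (pick one, not both):

```hcl
# Option 1 (what you have): output the ID itself,
# then reference it as module.vpc.vpc_1
output "vpc_1" {
  value = aws_vpc.vpc_1.id
}

# Option 2: output the whole resource object,
# then reference it as module.vpc.vpc_1.id
output "vpc_1" {
  value = aws_vpc.vpc_1
}
```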
The second issue is that you are trying to use the output of one child module (VPC) inside another child module (SG) by instantiating the VPC module again:
vpc_id = module.vpc.vpc_1.id
The error itself appears because the SG module does not declare a vpc_id input variable, so the root module is not allowed to pass one in. Instead of nesting a second module "vpc" block inside the SG module (which would create a duplicate VPC), declare a variable in the SG module:
variable "vpc_id" {
  description = "VPC ID."
  type        = string
}
Then, inside the SG module, reference it as:
vpc_id = var.vpc_id
When calling the SG module, you would use the following:
module "sg" {
  source    = "./modules/sg"
  infra_env = var.infra_env
  vpc_id    = module.vpc.vpc_1
}
Note that the same change to var.vpc_id should be made everywhere in the SG module where module.vpc.vpc_1.id was referenced, and the module "vpc" block inside the SG module can then be removed.
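Putting the pieces together, a minimal sketch of the corrected SG module (file layout assumed) could look like this:

```hcl
# modules/sg/variables.tf
variable "infra_env" {
  type = string
}

variable "vpc_id" {
  description = "VPC ID."
  type        = string
}

# modules/sg/main.tf -- the nested module "vpc" block is removed
resource "aws_security_group" "allow_rdp" {
  name        = "allow_rdp"
  description = "Allow access to the instances via RDP"
  vpc_id      = var.vpc_id

  ingress {
    description = "Allow access to the instances via RDP"
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.infra_env}-allow-rdp"
  }
}
```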
[1] https://www.terraform.io/language/values/outputs#accessing-child-module-outputs

Related

Connection time out when using Terraform

I tried to create an instance from a subnet and VPC ID, but I am having an issue with the remote-exec provisioner. The purpose is to create 2 public subnets (eu-west-1a) and 2 private subnets (eu-west-1b), use the subnet and VPC IDs to create an instance, and then SSH in and install nginx. I am not sure how to resolve this and unfortunately I am not an expert in Terraform, so guidance is needed here. When I also tried to SSH from the command prompt, it said connection timed out, even though port 22 is open in the security group.
╷
│
Error: remote-exec provisioner error
│
│ with aws_instance.EC2InstanceCreate,
│ on main_ec2.tf line 11, in resource "aws_instance" "EC2InstanceCreate":
│ 11: provisioner "remote-exec" {
│
│ timeout - last error: dial tcp 54.154.137.10:22: i/o timeout
╵
My code below :
# Server Definition
resource "aws_instance" "EC2InstanceCreate" {
ami = "${var.aws_ami}"
instance_type = "${var.server_type}"
key_name = "${var.target_keypairs}"
subnet_id = "${var.target_subnet}"
provisioner "remote-exec" {
connection {
type = "ssh"
host = "${self.public_ip}"
user = "centos"
private_key = "${file("/home/michael/cs-104-michael/lesson6/EC2Tutorial.pem")}"
timeout = "5m"
}
inline = [
"sudo yum -y update",
"sudo yum -y install nginx",
"sudo service nginx start",
"sudo yum -y install wget unzip",
]
}
tags = {
Name = "cs-104-lesson6-michael"
Environment = "TEST"
App = "React App"
}
}
output "pub_ip" {
  value      = ["${aws_instance.EC2InstanceCreate.public_ip}"]
  depends_on = [aws_instance.EC2InstanceCreate]
}
security group config :
# Create security group for webserver
resource "aws_security_group" "webserver_sg" {
name = "sg_ws_name"
vpc_id = "${var.target_vpc}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
description = "HTTP"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
description = "SSH"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "Security Group VPC devmind"
Project = "demo-assignment"
}
}
subnet code :
resource "aws_subnet" "public-subnet" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.public_subnet_2a_cidr}"
availability_zone = "eu-west-1a"
map_public_ip_on_launch = true
tags = {
Name = "Web Public subnet 1"
}
}
resource "aws_subnet" "public-subnet2" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.public_subnet_2b_cidr}"
availability_zone = "eu-west-1a"
map_public_ip_on_launch = true
tags = {
Name = "Web Public subnet 2"
}
}
# Define private subnets
resource "aws_subnet" "private-subnet" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.private_db_subnet_2a_cidr}"
availability_zone = "eu-west-1b"
map_public_ip_on_launch = false
tags = {
Name = "App Private subnet 1"
}
}
resource "aws_subnet" "private-subnet2" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.private_db_subnet_2b_cidr}"
availability_zone = "eu-west-1b"
map_public_ip_on_launch = false
tags = {
Name = "App Private subnet 2"
}
}
vpc code :
# Define our VPC
resource "aws_vpc" "default" {
cidr_block = "${var.vpc_cidr}"
enable_dns_hostnames = true
tags = {
Name = "Devops POC VPC"
}
}
Internet gateway included code :
# Internet Gateway
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.default.id}"
tags = {
name = "VPC IGW"
}
}
You are not providing vpc_security_group_ids for your instance:
vpc_security_group_ids = [aws_security_group.webserver_sg.id]
There could be many other issues, such as an incorrectly set up VPC, which is not shown.
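Applied to the instance from the question (provisioner and tags omitted for brevity), the fix would look roughly like this:

```hcl
resource "aws_instance" "EC2InstanceCreate" {
  ami           = "${var.aws_ami}"
  instance_type = "${var.server_type}"
  key_name      = "${var.target_keypairs}"
  subnet_id     = "${var.target_subnet}"

  # Attach the security group that opens ports 22 and 80; without this,
  # the instance falls back to the VPC default group and SSH times out.
  vpc_security_group_ids = [aws_security_group.webserver_sg.id]
}
```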

How to launch an EFS in the default VPC?

I am trying to launch an EFS file system in my default VPC. I am able to create the EFS but not able to mount the target in all subnets. In subnet_id I am not sure how to pass the value of all the subnet IDs of the default VPC. Below is my Terraform code:
$ cat ec2.tf
provider "aws" {
  region  = "ap-south-1"
  profile = "saumikhp"
}
data "aws_vpc" "default" {
default = true
}
data "aws_subnet_ids" "example" {
vpc_id = var.vpc_id
}
data "aws_subnet" "example" {
for_each = data.aws_subnet_ids.example.ids
id = each.value
}
resource "aws_key_pair" "key" {
key_name = "mykey12345"
public_key = file("mykey12345.pub")
}
resource "aws_security_group" "web-sg" {
name = "web-sg"
description = "Allow port 22 and 80"
vpc_id = "vpc-18819d70"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "web-sg"
}
}
resource "aws_instance" "myinstance" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "mykey12345"
security_groups = ["web-sg"]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("mykey12345")
host = aws_instance.myinstance.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
tags = {
Name = "SaumikOS"
}
}
resource "aws_efs_file_system" "efs" {
creation_token = "efs"
performance_mode = "generalPurpose"
throughput_mode = "bursting"
encrypted = "true"
tags = {
Name = "EfsExample"
}
}
resource "aws_efs_mount_target" "efs-mt" {
depends_on = [
aws_instance.myinstance,
]
for_each = data.aws_subnet_ids.example.ids
subnet_id = each.value
file_system_id = "aws_efs_file_system.efs.id"
security_groups = ["aws_security_group.web-sg.id"]
}
Error after running terraform apply
You can get the subnets of the default VPC by using a combination of the aws_vpc and aws_subnet_ids data sources:
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "example" {
  vpc_id = data.aws_vpc.default.id
}
You can then create an EFS mount target in each of the subnets by looping over these (each mount target only takes a single subnet_id):
resource "aws_efs_mount_target" "efs-mt" {
  for_each        = data.aws_subnet_ids.example.ids
  file_system_id  = aws_efs_file_system.efs.id
  subnet_id       = each.value
  security_groups = [aws_security_group.web-sg.id]
}
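One detail worth calling out from the question's code: file_system_id and security_groups were written as quoted strings, so Terraform treated them as literal text instead of references to the resources. The contrast:

```hcl
# As written in the question -- quoted, so Terraform sees literal strings:
file_system_id  = "aws_efs_file_system.efs.id"
security_groups = ["aws_security_group.web-sg.id"]

# What Terraform needs -- unquoted expressions referencing the resources:
file_system_id  = aws_efs_file_system.efs.id
security_groups = [aws_security_group.web-sg.id]
```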

AWS SG self reference resolving different environments

I want to make this more modular for different environments (DEV, UAT, PROD). In that case, I believe I should use the 'name' of the SG ("App${local.environment}sec_group"), or only sec_group? Will source_security_group_id be able to resolve here? main.tf file:
resource "aws_security_group" "sec_group" {
  name   = "App${local.environment}sec_group"
  vpc_id = "${local.vpc_id}"
}

resource "aws_security_group_rule" "sec_group_allow_1865" {
  type                     = "ingress"
  from_port                = 1865
  to_port                  = 1865
  protocol                 = "tcp"
  security_group_id        = "${aws_security_group.sec_group.id}"
  source_security_group_id = "${aws_security_group.App${local.environment}sec_group.id}"
}
Variable.tf file:-
environment = "${lookup(var.ws_to_environment_map, terraform.workspace, var.default_environment)}"
vpc_id = "${lookup(var.ws_to_vpc_map, terraform.workspace, var.default_environment)}"
variable "default_environment" {
default = "dev"
}
variable "ws_to_vpc_map" {
type = "map"
default = {
dev = "vpc-03a05d67831e1ff035"
uat = ""
prod = ""
}
}
variable "ws_to_environment_map" {
type = "map"
default = {
dev = "DEV"
uat = "UAT"
prod = "PROD"
}
}
Here you could use
source_security_group_id = "${aws_security_group.sec_group.id}"
instead of
source_security_group_id = "${aws_security_group.App${local.environment}sec_group.id}"
Resources are referenced by their Terraform resource name, not by their name argument: aws_security_group.sec_group refers to the resource declared as resource "aws_security_group" "sec_group", and aws_security_group.sec_group.id is its ID. Since the name argument already interpolates local.environment, the same reference resolves correctly in every environment (DEV, UAT, PROD) without further changes.
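As an aside, when a rule's source is the same security group it belongs to, aws_security_group_rule also supports self = true instead of source_security_group_id; a sketch, in case that is the intent here:

```hcl
resource "aws_security_group_rule" "sec_group_allow_1865" {
  type              = "ingress"
  from_port         = 1865
  to_port           = 1865
  protocol          = "tcp"
  security_group_id = aws_security_group.sec_group.id

  # The group itself is the traffic source, so no
  # source_security_group_id is needed.
  self = true
}
```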

Route 53 - Changing A type to AAAA

I have a terraform script that works well for a type=A DNS record. So when I execute this:
data "aws_acm_certificate" "this" {
domain = "*.${var.CERTIFICATE_DOMAIN}"
}
resource "aws_security_group" "this" {
name = "${var.SERVICE}-${var.ENV}-${var.REGION}-allow_all"
description = "Allow all inbound traffic"
vpc_id = "${data.aws_vpc.this.id}"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
data "aws_subnet_ids" "this" {
vpc_id = "${data.aws_vpc.this.id}"
tags {
Service = "external"
}
}
data "aws_security_groups" "ecs" {
tags {
Environment = "${var.VPC_ENV}"
Region = "${var.REGION}"
}
filter {
name = "vpc-id"
values = ["${data.aws_vpc.this.id}"]
}
filter {
name = "group-name"
values = ["${var.ENV}-api-internal-ecs-host*-sg"]
}
}
resource "aws_security_group_rule" "lb2ecs" {
from_port = 32768
to_port = 65535
protocol = "tcp"
security_group_id = "${data.aws_security_groups.ecs.ids[0]}"
source_security_group_id = "${aws_security_group.this.id}"
type = "ingress"
}
resource "aws_alb" "https" {
name = "${var.SERVICE}-${var.ENV}-alb"
internal = false
load_balancer_type = "application"
security_groups = ["${aws_security_group.this.id}"]
subnets = ["${data.aws_subnet_ids.this.ids}"]
}
data "aws_route53_zone" "this" {
name = "${var.CERTIFICATE_DOMAIN}."
private_zone = false
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.this.zone_id}"
name = "${var.SERVICE}-${var.ENV}-${var.REGION}.${var.CERTIFICATE_DOMAIN}"
type = "A"
alias {
name = "${aws_alb.https.dns_name}"
zone_id = "${aws_alb.https.zone_id}"
evaluate_target_health = true
}
}
resource "aws_alb_target_group" "https" {
name = "${var.SERVICE}-${var.ENV}-https"
port = 3000
protocol = "HTTP"
vpc_id = "${data.aws_vpc.this.id}"
health_check {
path = "/health"
}
}
resource "aws_alb_listener" "https" {
load_balancer_arn = "${aws_alb.https.arn}"
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2015-05"
certificate_arn = "${data.aws_acm_certificate.this.arn}"
default_action {
type = "forward"
target_group_arn = "${aws_alb_target_group.https.arn}"
}
}
It properly creates a new HTTPS endpoint and I can easily put a service behind it (by linking aws_alb_target_group.https with the ECS service).
I need to add IPv6 support, so I was thinking: what about just changing the A type to AAAA in resource "aws_route53_record" "www"? Terraform executed fine, stating that the record was changed, and in Route 53 the record looks exactly the same as before but with type AAAA. However, the service is not reachable anymore.
In Route 53, I can see that there is an ALIAS that looks like this: someservice-test-alb-1395527311.eu-central-1.elb.amazonaws.com. I can reach the service through it over HTTPS from the public internet. However, the "nice" endpoint that was working before doesn't work anymore, and pinging the URL no longer resolves to any IP.
Am I missing something?
For AAAA records you first need to enable IPv6 support in your VPC; by default it is not enabled, and the ALB must be dual-stack before an AAAA alias can resolve. Once that is done, you can follow the guide in the blog below to enable IPv6 with Terraform:
https://medium.com/@mattias.holmlund/setting-up-ipv6-on-amazon-with-terraform-e14b3bfef577
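A rough sketch of the Terraform pieces involved (resource names assumed, not a drop-in fix):

```hcl
# 1. Give the VPC an Amazon-provided IPv6 CIDR block.
resource "aws_vpc" "this" {
  cidr_block                       = "10.0.0.0/16"
  assign_generated_ipv6_cidr_block = true
}

# 2. Make the ALB dual-stack so its DNS name resolves over IPv6.
#    The subnets also need IPv6 CIDRs and IPv6 routes to the IGW.
resource "aws_alb" "https" {
  name            = "example-alb"
  internal        = false
  ip_address_type = "dualstack"
  security_groups = ["${aws_security_group.this.id}"]
  subnets         = ["${data.aws_subnet_ids.this.ids}"]
}

# 3. Keep the existing A record and add an AAAA alias alongside it,
#    rather than replacing the A record (which breaks IPv4 clients).
resource "aws_route53_record" "www_v6" {
  zone_id = "${data.aws_route53_zone.this.zone_id}"
  name    = "${var.SERVICE}-${var.ENV}-${var.REGION}.${var.CERTIFICATE_DOMAIN}"
  type    = "AAAA"

  alias {
    name                   = "${aws_alb.https.dns_name}"
    zone_id                = "${aws_alb.https.zone_id}"
    evaluate_target_health = true
  }
}
```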

terraform : Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation

I have the below terraform script, which works fine when I use it from the terminal.
provider "aws" {
region = "${var.aws_region}"
}
resource "aws_instance" "jenkins-poc" {
count = "2"
ami = "${var.aws_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
availability_zone = "${var.aws_region}${element(split(",",var.zones),count.index)}"
vpc_security_group_ids = ["${aws_security_group.jenkins-poc.id}"]
subnet_id = "${element(split(",",var.subnet_id),count.index)}"
user_data = "${file("userdata.sh")}"
tags {
Name = "jenkins-poc${count.index + 1}"
Owner = "Shailesh"
}
}
resource "aws_security_group" "jenkins-poc" {
vpc_id = "${var.vpc_id}"
name = "${var.security_group_name}"
description = "Allow http,httpd and SSH"
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "jenkins-poc-elb" {
name = "jenkins-poc-elb"
subnets = ["subnet-","subnet-"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = "80"
lb_protocol = "http"
}
health_check {
healthy_threshold = "2"
unhealthy_threshold = "3"
timeout = "3"
target = "tcp:80"
interval = 30
}
instances = ["${aws_instance.jenkins-poc.*.id}"]
}
and variables file is as given below.
variable "aws_ami" {
default = "ami-"
}
variable "zones"{
default = "a,b"
}
variable "aws_region" {
default = "us-east-1"
}
variable "key_name" {
default = "test-key"
}
variable "instance_type" {
default = "t2.micro"
}
variable "count" {
default = "2"
}
variable "security_group_name" {
default = "jenkins-poc"
}
variable "vpc_id" {
default = "vpc-"
}
variable "subnet_id" {
default = "subnet-,subnet"
}
Everything works fine when I run it from the terminal using terraform apply, but the same code gives me the below error when I run it through Jenkins.
aws_security_group.jenkins-poc: Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation
Note: this is a non-default VPC in which I am performing this operation. I would highly appreciate any comments. I didn't include sensitive values.
Just make sure you are using the right AWS profile; the default profile may not have the permissions needed to create security groups. You can point the provider at an explicit profile:
provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "xxxxxxx"
}