Failed to create EC2 instance using Terraform when a security group is set

I tried to create an EC2 instance. When I don't set a security group it works, but when I set one it fails with the following message:
│ Error: creating EC2 Instance: InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty
│ status code: 400, request id: 2935799e-2364-4676-ba02-457740336cd1
│
│ with aws_instance.my_first_instance,
│ on main.tf line 44, in resource "aws_instance" "my_first_instance":
│ 44: resource "aws_instance" "my_first_instance" {
The code is:

variable "ecs_cluster_name" {
  type    = string
  default = "production"
}

data "aws_ami" "ecs_ami" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-2.0.202*-x86_64-ebs"]
  }
}

output "ami_name" {
  value       = data.aws_ami.ecs_ami.name
  description = "the name of ecs ami"
}

output "security_group_id" {
  value       = aws_security_group.default.id
  description = "id of security group"
}

resource "aws_security_group" "default" {
  name = "terraform_Security_group"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "my_first_instance" {
  ami           = data.aws_ami.ecs_ami.id
  instance_type = "t2.micro"
  # security_groups = ["sg-06e91dae98b2c44c6"]
  security_groups = [aws_security_group.default.id]

  user_data = <<-EOF
    #!/bin/bash
    echo ECS_CLUSTER={cluster_name} >> /etc/ecs/ecs.config
  EOF
}

You should be using vpc_security_group_ids:
vpc_security_group_ids = [aws_security_group.default.id]
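For reference, a minimal sketch of the corrected resource, assuming the rest of the question's configuration stays the same. The ${var.ecs_cluster_name} interpolation in user_data is my assumption; the original {cluster_name} placeholder would be written into ecs.config literally.

resource "aws_instance" "my_first_instance" {
  ami           = data.aws_ami.ecs_ami.id
  instance_type = "t2.micro"

  # vpc_security_group_ids takes security group IDs and is the argument to use
  # inside a VPC; security_groups is intended for EC2-Classic / the default VPC
  # and generally expects group names rather than IDs.
  vpc_security_group_ids = [aws_security_group.default.id]

  user_data = <<-EOF
    #!/bin/bash
    # Assumption: interpolate the variable declared earlier in the question.
    echo ECS_CLUSTER=${var.ecs_cluster_name} >> /etc/ecs/ecs.config
  EOF
}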

Related

Inappropriate value for attribute "security_groups": element 0: string required

I'm not sure why I'm getting this error.
I have this resource in bastion/main.tf:

resource "aws_security_group" "bastion_sg" {
  name   = "${var.name}-bastion-security-group"
  vpc_id = var.vpc_id

  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = -1
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.name}-bastion-sg"
  }
}
Here is my output for that in bastion/outputs.tf:

output "bastion_sg_id" {
  value = aws_security_group.bastion_sg
}
My eks module in my root directory main.tf:

module "eks" {
  source          = "./eks"
  name            = var.name
  key_name        = module.bastion.key_name
  bastion_sg      = module.bastion.bastion_sg_id
  vpc_id          = module.networking.vpc_id
  private_subnets = module.networking.vpc_private_subnets
}
My variables in my eks/variables.tf:

variable "bastion_sg" {
  description = "bastion sg to add to ingress rule of node sg"
}
Lastly, my eks/main.tf where the error is occurring:
esource "aws_security_group" "node-sg" {
name = "${var.name}-node-security-group"
vpc_id = var.vpc_id
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
security_groups = [var.bastion_sg]
}
egress {
protocol = -1
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
I tried it with and without the [] around the security_groups value. Without the brackets I got the "set of strings required" error, and with them I got this error:
on eks\main.tf line 95, in resource "aws_security_group" "node-sg":
│ 95: security_groups = [var.bastion_sg]
│ ├────────────────
│ │ var.bastion_sg is object with 13 attributes
│
│ Inappropriate value for attribute "security_groups": element 0: string required.
It should be:
output "bastion_sg_id" {
value = aws_security_group.bastion_sg.id
}
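For clarity, a short sketch of the resulting wiring, using only pieces already shown in the question:

# bastion/outputs.tf - export just the ID rather than the whole resource object
output "bastion_sg_id" {
  value = aws_security_group.bastion_sg.id
}

# eks/main.tf - var.bastion_sg is now a plain string (an sg-... ID), so the
# ingress block from the question works as written:
#   security_groups = [var.bastion_sg]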

Connection timeout when using Terraform

I tried to create an instance from a subnet and VPC ID, but I am having an issue with the remote-exec provisioner. The purpose is to create 2 public subnets (eu-west-1a) and 2 private subnets (eu-west-1b), use the subnet and VPC IDs to create an instance, and then SSH in and install nginx. I am not sure how to resolve this, and unfortunately I am not an expert in Terraform, so guidance is needed here. When I also tried to SSH using the command prompt, it said the connection timed out, even though port 22 is open in the security group.
╷
│
Error: remote-exec provisioner error
│
│ with aws_instance.EC2InstanceCreate,
│ on main_ec2.tf line 11, in resource "aws_instance" "EC2InstanceCreate":
│ 11: provisioner "remote-exec" {
│
│ timeout - last error: dial tcp 54.154.137.10:22: i/o timeout
╵
My code below:

# Server Definition
resource "aws_instance" "EC2InstanceCreate" {
  ami           = "${var.aws_ami}"
  instance_type = "${var.server_type}"
  key_name      = "${var.target_keypairs}"
  subnet_id     = "${var.target_subnet}"

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = "${self.public_ip}"
      user        = "centos"
      private_key = "${file("/home/michael/cs-104-michael/lesson6/EC2Tutorial.pem")}"
      timeout     = "5m"
    }

    inline = [
      "sudo yum -y update",
      "sudo yum -y install nginx",
      "sudo service nginx start",
      "sudo yum -y install wget, unzip",
    ]
  }

  tags = {
    Name        = "cs-104-lesson6-michael"
    Environment = "TEST"
    App         = "React App"
  }
}

output "pub_ip" {
  value      = ["${aws_instance.EC2InstanceCreate.public_ip}"]
  depends_on = [aws_instance.EC2InstanceCreate]
}
Security group config:
# Create security group for webserver
resource "aws_security_group" "webserver_sg" {
  name   = "sg_ws_name"
  vpc_id = "${var.target_vpc}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    description = "HTTP"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    description = "HTTP"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name    = "Security Group VPC devmind"
    Project = "demo-assignment"
  }
}
Subnet code:
resource "aws_subnet" "public-subnet" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.public_subnet_2a_cidr}"
availability_zone = "eu-west-1a"
map_public_ip_on_launch = true
tags = {
Name = "Web Public subnet 1"
}
}
resource "aws_subnet" "public-subnet2" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.public_subnet_2b_cidr}"
availability_zone = "eu-west-1a"
map_public_ip_on_launch = true
tags = {
Name = "Web Public subnet 2"
}
}
# Define private subnets
resource "aws_subnet" "private-subnet" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.private_db_subnet_2a_cidr}"
availability_zone = "eu-west-1b"
map_public_ip_on_launch = false
tags = {
Name = "App Private subnet 1"
}
}
resource "aws_subnet" "private-subnet2" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${var.private_db_subnet_2b_cidr}"
availability_zone = "eu-west-1b"
map_public_ip_on_launch = false
tags = {
Name = "App Private subnet 2"
}
}
VPC code:
# Define our VPC
resource "aws_vpc" "default" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags = {
    Name = "Devops POC VPC"
  }
}
Internet gateway code:
# Internet Gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.default.id}"

  tags = {
    name = "VPC IGW"
  }
}
You are not providing vpc_security_group_ids for your instance:
vpc_security_group_ids = [aws_security_group.webserver_sg.id]
There could be many other issues as well, such as an incorrectly set up VPC, which is not shown.
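As a hedged sketch of what that could look like: attach the security group by ID, and make sure the public subnet actually routes to the internet gateway (no route table is shown in the question, so the resource names below are assumptions):

resource "aws_instance" "EC2InstanceCreate" {
  ami                    = "${var.aws_ami}"
  instance_type          = "${var.server_type}"
  key_name               = "${var.target_keypairs}"
  subnet_id              = "${var.target_subnet}"
  vpc_security_group_ids = ["${aws_security_group.webserver_sg.id}"]

  # ... provisioner, connection and tags unchanged from the question ...
}

# Assumed addition: without a 0.0.0.0/0 route to the internet gateway, the
# instance's public IP is unreachable and both remote-exec and manual SSH
# will time out.
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = "${aws_subnet.public-subnet.id}"
  route_table_id = "${aws_route_table.public.id}"
}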

Error: Reference to undeclared resource

I am new to Terraform and trying to create an AWS instance (t2.nano) from the image below.
This is my tf file:
provider "aws" {
profile = "default"
region = "us-west-2"
}
resource "aws_s3_bucket" "prod_tf_course" {
bucket = "tf-course-20210607"
acl = "private"
}
resource "aws_default_vpc" "default" {}
resource "aws_security_group" "group_web"{
name = "prod_web"
description = "allow standard http and https ports inbound and everithing outbound"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
"Terraform" : "true"
}
}
resource "aws_instance" "prod_web"{
ami = "ami-05105e44227712eb6"
instance_type ="t2.nano"
vpc_security_group_ids = [
aws_security_group.prod_web.id
]
tags = {
"Terraform" : "true"
}
}
When I run the command terraform plan, it produces the following error:
$ terraform plan
╷
│ Error: Reference to undeclared resource
│
│ on prod.tf line 50, in resource "aws_instance" "prod_web":
│ 50: aws_security_group.prod_web.id
│
│ A managed resource "aws_security_group" "prod_web" has not been declared in
│ the root module.
╵
If someone can help me fix it, I will be so happy.
It should be:
vpc_security_group_ids = [
  aws_security_group.group_web.id
]
as your aws_security_group is called group_web, not prod_web.
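Put together, the instance block from the question would read like this sketch (everything else in the file stays as it was):

resource "aws_instance" "prod_web" {
  ami           = "ami-05105e44227712eb6"
  instance_type = "t2.nano"

  # Reference the security group by its Terraform resource name (group_web),
  # not by this instance's own name (prod_web).
  vpc_security_group_ids = [
    aws_security_group.group_web.id
  ]

  tags = {
    "Terraform" : "true"
  }
}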

SSH key for AWS Autoscaling Group node not working

So I created the autoscaling group using the following Terraform files:
Autoscaling group:
resource "aws_autoscaling_group" "orbix-mvp" {
desired_capacity = 1
launch_configuration = "${aws_launch_configuration.orbix-mvp.id}"
max_size = 1
min_size = 1
name = "${var.project}-${var.stage}"
vpc_zone_identifier = ["${aws_subnet.orbix-mvp.*.id}"]
tag {
key = "Name"
value = "${var.project}-${var.stage}"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.project}-${var.stage}"
value = "owned"
propagate_at_launch = true
}
}
Launch configuration:
# This data source is included for ease of sample architecture deployment
# and can be swapped out as necessary.
data "aws_region" "current" {}
# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encoding this
# information into the AutoScaling Launch Configuration.
# More information: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
locals {
  orbix-mvp-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.orbix-mvp.endpoint}' --b64-cluster-ca '${aws_eks_cluster.orbix-mvp.certificate_authority.0.data}' '${var.project}-${var.stage}'
USERDATA
}

resource "aws_launch_configuration" "orbix-mvp" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.orbix-mvp-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "c5.large"
  name_prefix                 = "${var.project}-${var.stage}"
  security_groups             = ["${aws_security_group.orbix-mvp-node.id}"]
  user_data_base64            = "${base64encode(local.orbix-mvp-node-userdata)}"
  key_name                    = "devops"

  lifecycle {
    create_before_destroy = true
  }
}
So I've added the already generated SSH key under the name devops to the launch configuration. I can SSH into manually created EC2 instances with that key, but I can't seem to SSH into instances created by this config.
Any help is appreciated, thanks :)
EDIT:
Node Security Group Terraform:
resource "aws_security_group" "orbix-mvp-node" {
name = "${var.project}-${var.stage}-node"
description = "Security group for all nodes in the ${var.project}-${var.stage} cluster"
vpc_id = "${aws_vpc.orbix-mvp.id}"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${
map(
"Name", "${var.project}-${var.stage}-node",
"kubernetes.io/cluster/${var.project}-${var.stage}", "owned",
)
}"
}
resource "aws_security_group_rule" "demo-node-ingress-self" {
description = "Allow node to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-node.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-ingress-cluster" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-cluster.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-port-22" {
description = "Add SSH access"
from_port = 22
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
cidr_blocks = ["0.0.0.0/0"]
to_port = 22
type = "ingress"
}

Key Files in Terraform

Whenever I add key_name to my Amazon resource, I can never actually connect to the resulting instance:
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
"key_name" = "po"
tags {
Name = "API_Server"
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}
When I do
ssh -i ~/Downloads/po.pem bitnami@IP
I just get a blank line in my terminal, as if I were using the wrong IP. However, checking the Amazon console, I can see the instance is running. I'm not getting any errors from Terraform either.
By default, no network access is allowed. You need to explicitly allow network access by attaching a security group.
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
key_name = "po"
security_groups = ["${aws_security_group.api_server.id}"]
tags {
Name = "API_Server"
}
}
resource "aws_security_group" "api_server" {
name = "api_server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["XXX.XXX.XXX.XXX/32"] // Allow SSH from your global IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}
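One note on the design choice above: security_groups is intended for EC2-Classic or the default VPC and generally matches groups by name, while vpc_security_group_ids takes IDs and is the usual choice when the instance lives in a VPC (as in the first answer on this page). A sketch of the same instance using IDs, under that assumption:

resource "aws_instance" "api_server" {
  ami           = "ami-013f1e6b"
  instance_type = "t2.micro"
  key_name      = "po"

  # Attach the security group by ID instead of by name.
  vpc_security_group_ids = ["${aws_security_group.api_server.id}"]

  tags {
    Name = "API_Server"
  }
}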