HTTP server in EC2 instance via Terraform - amazon-web-services

terraform {
  required_providers {
    aws = {
      version = "~>3.27"
      source  = "hashicorp/aws"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

variable "tag_name" {
  type = string
}

resource "aws_instance" "app_server" {
  ami                    = "ami-830c94e3"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.allow_port_8080.id]

  user_data = <<-EOF
    #!/bin/bash
    # Use this for your user data (script from top to bottom)
    # install httpd (Linux 2 version)
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
  EOF

  tags = {
    Name = var.tag_name
  }
}

resource "aws_security_group" "allow_port_8080" {
  name = "allow_port_8080"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
This is the Terraform file I created. I want to set up an HTTP server on my EC2 instance and then access it via its public IPv4 address.
But http://publicip:8080 gives the error:
This site can’t be reached
I tried modifying the user data as below:
user_data = <<-EOF
  #!/bin/bash
  echo "<h1>Hello World</h1>" > index.html
  nohup busybox httpd -f -p 8080
EOF
I am following this tutorial:
https://www.youtube.com/watch?v=0i-Q6ZMDtlQ&list=PLqq-6Pq4lTTYwjFB9E9aLUJhTBLsCF0p_&index=32
Thank you.

Your aws_security_group does not allow any outgoing traffic, so the instance cannot reach the package repositories to install httpd. You have to explicitly allow outgoing traffic:
resource "aws_security_group" "allow_port_8080" {
  name = "allow_port_8080"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
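Also note that the httpd installed by the first user_data script listens on port 80 by default, while the security group only opens 8080 (the busybox variant does listen on 8080). If you keep the httpd script, you would most likely also need an ingress rule for port 80, something like:

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

and then browse to http://publicip/ instead of http://publicip:8080.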

Related

Cannot reach YUM repos on terraform EC2 instances

Background:
I used Terraform to build an AWS autoscaling group with a few instances spread across availability zones and linked by a load balancer. Everything is created properly, but the load balancer has no valid targets because there is nothing listening on port 80.
Fine, I thought. I'll install NGINX and throw up a basic config.
Expected behavior
Instances should be able to reach the yum repos.
Actual behavior
I'm unable to ping anything or run any package manager commands; I get the following error:
Could not retrieve mirrorlist https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list error was
12: Timeout on https://amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, 'Failed to connect to amazonlinux-2-repos-us-east-2.s3.dualstack.us-east-2.amazonaws.com port 443 after 2700 ms: Connection timed out')
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
Troubleshooting steps taken
I'm new to Terraform and I'm still having issues with the automated provisioning via user_data, so I SSH'd into the instance. The instance is set up in a public subnet with an auto-provisioned public IP. Below is the code for the security groups.
resource "aws_security_group" "elb_webtrafic_sg" {
  name        = "elb-webtraffic-sg"
  description = "Allow inbound web trafic to load balancer"
  vpc_id      = aws_vpc.main_vpc.id

  ingress {
    description = "HTTPS trafic from vpc"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP trafic from vpc"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "allow SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "all traffic out"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "elb-webtraffic-sg"
  }
}
resource "aws_security_group" "instance_sg" {
  name        = "instance-sg"
  description = "Allow traffic from load balancer to instances"
  vpc_id      = aws_vpc.main_vpc.id

  ingress {
    description     = "web traffic from load balancer"
    security_groups = [aws_security_group.elb_webtrafic_sg.id]
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
  }

  ingress {
    description     = "web traffic from load balancer"
    security_groups = [aws_security_group.elb_webtrafic_sg.id]
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
  }

  ingress {
    description = "ssh traffic from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description     = "all traffic to load balancer"
    security_groups = [aws_security_group.elb_webtrafic_sg.id]
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
  }

  tags = {
    Name = "instance-sg"
  }
}
# This is a workaround for the cyclical security group id call.
# I would like to figure out a way for this to be destroyed first;
# it currently takes longer to destroy than to set up.
# Terraform hangs because of the dependency each SG has on the other,
# but will eventually struggle down to this rule and delete it, clearing the deadlock.
resource "aws_security_group_rule" "elb_egress_to_webservers" {
  security_group_id        = aws_security_group.elb_webtrafic_sg.id
  type                     = "egress"
  source_security_group_id = aws_security_group.instance_sg.id
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
}

resource "aws_security_group_rule" "elb_tls_egress_to_webservers" {
  security_group_id        = aws_security_group.elb_webtrafic_sg.id
  type                     = "egress"
  source_security_group_id = aws_security_group.instance_sg.id
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
}
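A common way to avoid this deadlock is to manage both cross-referencing rules as standalone aws_security_group_rule resources and drop the corresponding inline ingress/egress blocks, since mixing inline rules with standalone rules on the same group is what tends to create the cycle. A minimal sketch, assuming the inline ingress blocks on instance_sg that reference the ELB group are removed (the resource name below is only illustrative):

resource "aws_security_group_rule" "webservers_ingress_from_elb" {
  # Allow HTTP from the load balancer's SG; repeat with 443 for HTTPS.
  security_group_id        = aws_security_group.instance_sg.id
  type                     = "ingress"
  source_security_group_id = aws_security_group.elb_webtrafic_sg.id
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
}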
Since I was able to SSH into the machine, I tried setting up the web instance security group to allow direct connections from the internet to the instance. Same errors: I cannot ping outside web addresses and get the same error on yum commands.
I can ping the default gateway in each subnet: 10.0.0.1, 10.0.1.1, 10.0.2.1.
Here is the routing configuration I currently have set up.
resource "aws_vpc" "main_vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "production-vpc"
  }
}

resource "aws_key_pair" "aws_key" {
  key_name   = "Tanchwa_pc_aws"
  public_key = file(var.public_key_path)
}

# internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main_vpc.id

  tags = {
    Name = "internet-gw"
  }
}

resource "aws_route_table" "route_table" {
  vpc_id = aws_vpc.main_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "production-route-table"
  }
}

resource "aws_subnet" "public_us_east_2a" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.0.0/24"
  availability_zone = "us-east-2a"

  tags = {
    Name = "Public-Subnet us-east-2a"
  }
}

resource "aws_subnet" "public_us_east_2b" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-2b"

  tags = {
    Name = "Public-Subnet us-east-2b"
  }
}

resource "aws_subnet" "public_us_east_2c" {
  vpc_id            = aws_vpc.main_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-2c"

  tags = {
    Name = "Public-Subnet us-east-2c"
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.public_us_east_2a.id
  route_table_id = aws_route_table.route_table.id
}

resource "aws_route_table_association" "b" {
  subnet_id      = aws_subnet.public_us_east_2b.id
  route_table_id = aws_route_table.route_table.id
}

resource "aws_route_table_association" "c" {
  subnet_id      = aws_subnet.public_us_east_2c.id
  route_table_id = aws_route_table.route_table.id
}

unable to access EC2 instance created using terraform

I have been following a YouTube guide to learn Terraform and have followed each step.
After running terraform apply it was able to set up everything as expected; I have verified this in the AWS console. But when trying to access the public IP, I get "connection refused".
Below is the content of my main.tf file.
provider "aws" {
  region     = "us-east-1"
  access_key = "ACCESS-KEY"
  secret_key = "SECERT-KEY"
}

# VPC
resource "aws_vpc" "prod-vpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "production"
  }
}

# create internet gateway
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.prod-vpc.id

  tags = {
    Name : "Prod gateway"
  }
}

# create custom route table
resource "aws_route_table" "prod-route-table" {
  vpc_id = aws_vpc.prod-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "Prod"
  }
}

# Create a subnet
resource "aws_subnet" "subnet-1" {
  vpc_id                  = aws_vpc.prod-vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "prod-subnet"
  }
}

# Associate subnet with Route Table
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.subnet-1.id
  route_table_id = aws_route_table.prod-route-table.id
}

# Create Security Group to allow port 22, 80, 443
resource "aws_security_group" "allow_web" {
  name        = "allow_web_traffic"
  description = "Allow Web traffic"
  vpc_id      = aws_vpc.prod-vpc.id

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 2
    to_port     = 2
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_web"
  }
}

# Create a network interface with an ip in the subnet that was created earlier
resource "aws_network_interface" "web-server-nic" {
  subnet_id       = aws_subnet.subnet-1.id
  private_ips     = ["10.0.1.50"]
  security_groups = [aws_security_group.allow_web.id]

  tags = {
    Name : "prod-network-interface"
  }
}

# Assign an elastic ip to the network interface created in previous step
resource "aws_eip" "one" {
  vpc                       = true
  network_interface         = aws_network_interface.web-server-nic.id
  associate_with_private_ip = "10.0.1.50"
  depends_on                = [aws_internet_gateway.gw, aws_instance.web-server-instance]

  tags = {
    Name : "Prod-Elastic-ip"
  }
}

# Create Ubuntu server and install/enable apache2
resource "aws_instance" "web-server-instance" {
  ami               = "ami-0747bdcabd34c712a"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1a"
  key_name          = "main-key"

  network_interface {
    device_index         = 0
    network_interface_id = aws_network_interface.web-server-nic.id
  }

  user_data = <<-EOF
    #! /bin/bash
    sudo apt update -y
    sudo apt install apache2
    sudo bash -c 'echo your very first web server > /var/www/html/index.html'
  EOF

  tags = {
    Name : "Web-Server"
  }
}
You are missing -y in your user data, so the script will just hang waiting for confirmation. It should be:
sudo apt install -y apache2
You also missed the command needed to start apache2 after installing it:
sudo systemctl start apache2
It also seems that the SSH port on the security group is not configured correctly. It should probably read 22 instead of 2 for both from_port and to_port.
The major problem here is the security group. For SSH access, you should open port 22, the default SSH port:
ingress {
  description = "SSH"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}
Besides that, you should fix your user data with:
#! /bin/bash
sudo apt update -y
sudo apt install apache2 -y
sudo systemctl start apache2
sudo bash -c 'echo your very first web server > /var/www/html/index.html'
I hope this can help you and other people with the same or similar issue.

how to fix the 504 error in my load balancer

I am new to Terraform and I am trying to create a simple structure with one ALB, two servers running a simple app, and one DB instance, but I get a 504 error when accessing the ALB's DNS name. According to the Amazon documentation, this means the load balancer established a connection to the target but the target did not respond before the idle timeout period elapsed. I have gone over the code 100 times but I cannot find the mistake. This is my ALB config:
# ASG
resource "aws_launch_configuration" "web-lc" {
  name            = "web-lc"
  image_id        = "ami-0fc970315c2d38f01"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.ec2-webServers-sg.id]
  key_name        = "practica_final_kp"

  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    sudo docker run -d --name rtb -p 8080:8080 vermicida/rtb
  EOF
}

resource "aws_autoscaling_group" "ec2-web-asg" {
  name                 = "ec2-web-asg"
  max_size             = 2
  min_size             = 2
  force_delete         = true
  launch_configuration = aws_launch_configuration.web-lc.name
  vpc_zone_identifier  = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]

  tag {
    key                 = "Name"
    value               = "ec2-web-asg"
    propagate_at_launch = "true"
  }
}

# ALB
resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 80
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id
}

resource "aws_alb" "web-alb" {
  name            = "web-alb"
  internal        = false
  subnets         = [aws_subnet.public-subnet1.id, aws_subnet.public-subnet2.id]
  security_groups = [aws_security_group.lb-sg.id]
}

resource "aws_alb_listener" "front_end" {
  load_balancer_arn = aws_alb.web-alb.id
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_alb_target_group.tg-alb.id
    type             = "forward"
  }
}

resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = aws_autoscaling_group.ec2-web-asg.id
  alb_target_group_arn   = aws_alb_target_group.tg-alb.arn
}
This is the security group:
resource "aws_security_group" "ec2-webServers-sg" {
  name   = "ec2-webServers-sg"
  vpc_id = aws_vpc.final-vpc.id

  ingress {
    description = "APP"
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24"]
  }

  egress {
    description = "SQL"
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = ["10.0.10.0/24", "10.0.20.0/24"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "F-web-servers-sg"
  }
}
It looks like your service on the EC2 instance is running on port 8080, but your target group is pointing to port 80. You need to change the target group port to 8080.
There could also be a problem with security groups or VPC network ACLs blocking the traffic; check that aws_security_group.ec2-webServers-sg allows inbound traffic on port 8080 from the subnets (or security group) the load balancer uses.
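For reference, here is the target group from the question with only the port changed to match the container, as suggested above:

resource "aws_alb_target_group" "tg-alb" {
  name        = "tg-alb"
  port        = 8080 # must match the port the app container listens on
  protocol    = "HTTP"
  target_type = "instance"
  vpc_id      = aws_vpc.final-vpc.id
}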

When I am running terraform apply

I am creating an EC2 instance and this is my main.tf file:
variable "aws_key_pair" {
  default = "~/aws/aws_keys/terraform-ec2.pem"
}

provider "aws" {
  region  = "us-east-1"
  version = "~>2.46"
}

resource "aws_security_group" "http_server_sg" {
  name   = "http_server_sg"
  vpc_id = "vpc-c5f40fb8"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    name = "http_server_sg"
  }
}

resource "aws_instance" "http_server" {
  ami                    = "ami-0947d2ba12ee1ff75"
  key_name               = "terraform-ec2"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.http_server_sg.id]
  subnet_id              = "subnet-1169c14e"

  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2_user"
    private_key = file(var.aws_key_pair)
  }

  provisioner "remote_exec" {
    inline = [
      "sudo yum install httpd -y",
      "sudo service httpd start",
      "echo Welcome to virtual server setup by terraform , IP address ${self.public_dns} | sudo tee /var/www/html/index.html"
    ]
  }
}
When I am running terraform apply I am getting the following error:
Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provisioner "remote_exec" to obtain schema: unknown
provisioner "remote_exec"
But I have already run terraform init, and when I run terraform validate I get the same error as above.
It's "remote-exec", with a hyphen, not "remote_exec".
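For reference, the provisioner block from the question with only the name corrected:

provisioner "remote-exec" {
  inline = [
    "sudo yum install httpd -y",
    "sudo service httpd start",
    "echo Welcome to virtual server setup by terraform , IP address ${self.public_dns} | sudo tee /var/www/html/index.html"
  ]
}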

terraform user-data not working though rendered

I am creating an EC2 machine, and when it is up I want Docker installed on it and the machine to add itself to the Rancher server.
The Terraform script for this is:
provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "rancherHost" {
  tags {
    name = "tf-rancher-host-rmf44"
  }

  ami                    = "ami-0189d76e"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.port500and4500.id}"]
  user_data              = "${data.template_file.user-data.rendered}"
}

data "template_file" "user-data" {
  template = "${file("script.sh")}"
}

resource "aws_security_group" "port500and4500" {
  ingress {
    from_port   = 500
    protocol    = "udp"
    to_port     = 500
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 4500
    protocol    = "udp"
    to_port     = 4500
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    protocol    = "tcp"
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    protocol    = "tcp"
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }
}

output "instanceDetail" {
  value = "${aws_instance.rancherHost.public_ip}"
}
The script I have written is in the same directory; its contents are:
#!/bin/bash
echo "fail fast learn fast"
echo "this is second line"
apt-get update -y || echo $?
apt-get install -y docker
apt-get install -y docker.io
echo "echo failing now maybe"
service docker start
When I ran this, the machine was created and the user data was visible, but when I checked the logs, only the first two echos were present and nothing else. Am I doing something wrong here? Manual creation using the same user data worked and the host was added to the Rancher server.
The OS is Ubuntu 16.04.