Cloud-init file: skip GitHub runner registration prompts - amazon-web-services

I want to deploy my GitHub repository to AWS with Terraform and a cloud-init file. I am doing this whole process with GitHub Actions. GitHub gave me some commands to enter for setting up a self-hosted runner:
#cloud-config
runcmd:
- curl -o actions-runner-linux-x64-2.300.2.tar.gz -L https://github.com/actions/runner/releases/download/v2.300.2/actions-runner-linux-x64-2.300.2.tar.gz
- echo "ed5bf2799c1ef7b2dd607df66e6b676dff8c44fb359c6fedc9ebf7db53339f0c  actions-runner-linux-x64-2.300.2.tar.gz" | shasum -a 256 -c
- tar xzf ./actions-runner-linux-x64-2.300.2.tar.gz
- ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
The last command is the runner registration with GitHub. During it I have to press the Enter key three times. My question is: how can I skip these prompts, or tell my script to press Enter three times for me?
Whole cloud-init file:
#cloud-config
runcmd:
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.300.2.tar.gz -L https://github.com/actions/runner/releases/download/v2.300.2/actions-runner-linux-x64-2.300.2.tar.gz
- echo "ed5bf2799c1ef7b2dd607df66e6b676dff8c44fb359c6fedc9ebf7db53339f0c  actions-runner-linux-x64-2.300.2.tar.gz" | shasum -a 256 -c
- tar xzf ./actions-runner-linux-x64-2.300.2.tar.gz
- ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- sudo apt install -y nginx
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {listen 80 default_server;server_name _;location /
{root
/home/ubuntu/react/_work/react-deploy-aws/react-deploy-
aws/build;try_files
\$uri /index.html;}}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/build
terraform file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1b"
}

data "template_file" "nginx" {
  template = file("./cloud-init.yaml")
}

resource "aws_security_group" "gradebook" {
  name        = "gradebook"
  description = "Security group for Gradebook server"

  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    protocol    = "tcp"
    from_port   = 443
    to_port     = 443
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web_server" {
  ami                    = "ami-0574da719dca65348"
  instance_type          = "t2.small"
  vpc_security_group_ids = [aws_security_group.gradebook.id]
  user_data              = data.template_file.nginx.rendered

  tags = {
    Name = "GradebookWebServer"
  }
}
I don't have any clue how to solve this.
I want the cloud-init file to get through terraform apply without any manual input.
I want the registration part, which takes place in this command:
./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
to be skipped. There are three steps within this command where you are supposed to press the Enter key.

You can use the expect command to automate pressing Enter three times. You can add the following commands to your cloud-init file:
- apt-get update && apt-get install -y expect
- expect -c "spawn ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX; expect \"Press Enter to continue\"; send \"\r\"; expect \"Press Enter to continue\"; send \"\r\"; expect \"Press Enter to continue\"; send \"\r\"; interact"
The expect command runs ./config.sh and automatically sends the Enter key each time it sees the "Press Enter to continue" message, three times in total.
Please note that this will only work if the prompts config.sh actually prints match those messages; adjust the expected strings if your prompts differ.
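If the quoting of that inline expect command inside YAML gets awkward, the same idea can be split into a small script instead. This is only a sketch: it assumes (as above) that every prompt can be answered with a plain Enter and contains the usual "press Enter" wording, that expect is installed by the previous command, and that the runner was unpacked into /home/ubuntu/react as the rest of your file assumes; the script path and name are made up for illustration:
#cloud-config
write_files:
  - path: /home/ubuntu/register-runner.exp
    permissions: '0755'
    content: |
      #!/usr/bin/expect -f
      # Send Enter for every prompt that mentions "press Enter", then wait for config.sh to exit.
      set timeout 120
      spawn ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
      expect {
        "press Enter" { send "\r"; exp_continue }
        eof
      }
runcmd:
  - cd /home/ubuntu/react && /home/ubuntu/register-runner.exp
Since cloud-init's runcmd has no interactive terminal attached, ending the expect script with eof (rather than interact) is likely also the safer choice for the inline variant above.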

Related

HTTP server in EC2 instance via Terraform

terraform {
required_providers {
aws = {
version = "~>3.27"
source = "hashicorp/aws"
}
}
}
provider "aws" {
profile = "default"
region = "us-west-2"
}
variable "tag_name" {
type = string
}
resource "aws_instance" "app_server" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.allow_port_8080.id]
user_data = <<-EOF
#!/bin/bash
# Use this for your user data (script from top to bottom)
# install httpd (Linux 2 version)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
EOF
tags = {
Name = var.tag_name
}
}
resource "aws_security_group" "allow_port_8080" {
name = "allow_port_8080"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
This is the Terraform file I created. I want to set up an HTTP server in my EC2 instance and then access it via its public IPv4 address,
but http://publicip:8080 gives an error:
This site can’t be reached
I tried modifying it as below:
user_data = <<-EOF
#!/bin/bash
echo "<h1>Hello World</h1>" > index.html
nohup busybox httpd -f -p 8080
EOF
I am following
https://www.youtube.com/watch?v=0i-Q6ZMDtlQ&list=PLqq-6Pq4lTTYwjFB9E9aLUJhTBLsCF0p_&index=32
thank you
Your aws_security_group does not allow any outgoing traffic, so you can't install httpd on the instance. You have to explicitly allow outgoing traffic:
resource "aws_security_group" "allow_port_8080" {
name = "allow_port_8080"
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}

AWS ECS Fargate run task: Essential container in task exited

Goal:
Create an interactive shell within an ECS Fargate container
Problem:
After running a task within the ECS service, the task status immediately goes to STOPPED after Pending and gives the following stopped reason: Essential container in task exited. Since the task is stopped, creating an interactive shell with the aws ecs execute-command is not feasible.
Background:
Using a custom ECR image for the target container
CloudWatch logs show that the entrypoint.sh associated with the ECR image ran successfully
Dockerfile:
FROM python:3.9-alpine AS build
ARG TERRAFORM_VERSION=1.0.2
ARG TERRAGRUNT_VERSION=0.31.0
ARG TFLINT_VERSION=0.23.0
ARG TFSEC_VERSION=0.36.11
ARG TFDOCS_VERSION=0.10.1
ARG GIT_CHGLOG_VERSION=0.14.2
ARG SEMTAG_VERSION=0.1.1
ARG GH_VERSION=2.2.0
ARG TFENV_VERSION=2.2.2
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /src/
COPY install.sh ./install.sh
COPY requirements.txt ./requirements.txt
RUN chmod u+x ./install.sh \
&& sh ./install.sh
FROM python:3.9-alpine
ENV VIRTUAL_ENV=/opt/venv
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PIP_DISABLE_PIP_VERSION_CHECK=1
ENV PATH="/usr/local/.tfenv/bin:$PATH"
WORKDIR /src/
COPY --from=build /usr/local /usr/local
COPY --from=build $VIRTUAL_ENV $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$VIRTUAL_ENV/lib/python3.9/site-packages:$PATH"
RUN apk update \
&& apk add --virtual .runtime \
bash \
git \
curl \
jq \
# needed for bats --pretty formatter
ncurses \
openssl \
grep \
# needed for pcregrep
pcre-tools \
coreutils \
postgresql-client \
libgcc \
libstdc++ \
ncurses-libs \
docker \
&& ln -sf python3 /usr/local/bin/python \
&& git config --global advice.detachedHead false \
&& git config --global user.email testing_user#users.noreply.github.com \
&& git config --global user.name testing_user
COPY entrypoint.sh ./entrypoint.sh
ENTRYPOINT ["bash", "entrypoint.sh"]
CMD ["/bin/bash"]
entrypoint.sh:
if [ -n "$ADDITIONAL_PATH" ]; then
echo "Adding to PATH: $ADDITIONAL_PATH"
export PATH="$ADDITIONAL_PATH:$PATH"
fi
source $VIRTUAL_ENV/bin/activate
pip install -e /src
echo "done"
Terraform configurations for ECS: (Using this AWS blog post as a reference)
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = local.mut_id
cidr = "10.0.0.0/16"
azs = ["us-west-2a", "us-west-2b", "us-west-2c", "us-west-2d"]
enable_dns_hostnames = true
public_subnets = local.public_subnets
create_database_subnet_group = true
database_dedicated_network_acl = true
database_inbound_acl_rules = [
{
rule_number = 1
rule_action = "allow"
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_block = local.private_subnets[0]
}
]
database_subnet_group_name = "metadb"
database_subnets = local.database_subnets
private_subnets = local.private_subnets
private_dedicated_network_acl = true
private_outbound_acl_rules = [
{
rule_number = 1
rule_action = "allow"
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_block = local.database_subnets[0]
}
]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
}
module "ecr_testing_img" {
source = "github.com/marshall7m/terraform-aws-ecr/modules//ecr-docker-img"
create_repo = true
source_path = "${path.module}/../.."
repo_name = "${local.mut_id}-integration-testing"
tag = "latest"
trigger_build_paths = [
"${path.module}/../../Dockerfile",
"${path.module}/../../entrypoint.sh",
"${path.module}/../../install.sh"
]
}
module "testing_kms" {
source = "github.com/marshall7m/terraform-aws-kms/modules//cmk"
trusted_admin_arns = [data.aws_caller_identity.current.arn]
trusted_service_usage_principals = ["ecs-tasks.amazonaws.com"]
}
module "testing_ecs_task_role" {
source = "github.com/marshall7m/terraform-aws-iam/modules//iam-role"
role_name = "${local.mut_id}-task"
trusted_services = ["ecs-tasks.amazonaws.com"]
statements = [
{
effect = "Allow"
actions = ["kms:Decrypt"]
resources = [module.testing_kms.arn]
},
{
effect = "Allow"
actions = [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
]
resources = ["*"]
}
]
}
module "testing_ecs_execution_role" {
source = "github.com/marshall7m/terraform-aws-iam/modules//iam-role"
role_name = "${local.mut_id}-exec"
trusted_services = ["ecs-tasks.amazonaws.com"]
custom_role_policy_arns = ["arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"]
}
resource "aws_ecs_cluster" "testing" {
name = "${local.mut_id}-integration-testing"
configuration {
execute_command_configuration {
kms_key_id = module.testing_kms.arn
logging = "DEFAULT"
}
}
}
resource "aws_ecs_service" "testing" {
name = "${local.mut_id}-integration-testing"
task_definition = aws_ecs_task_definition.testing.arn
cluster = aws_ecs_cluster.testing.id
desired_count = 0
enable_execute_command = true
launch_type = "FARGATE"
platform_version = "1.4.0"
network_configuration {
subnets = [module.vpc.public_subnets[0]]
security_groups = [aws_security_group.testing.id]
assign_public_ip = true
}
wait_for_steady_state = true
}
resource "aws_cloudwatch_log_group" "testing" {
name = "${local.mut_id}-ecs"
}
resource "aws_ecs_task_definition" "testing" {
family = "integration-testing"
requires_compatibilities = ["FARGATE"]
task_role_arn = module.testing_ecs_task_role.role_arn
execution_role_arn = module.testing_ecs_execution_role.role_arn
network_mode = "awsvpc"
cpu = 256
memory = 512
container_definitions = jsonencode([{
name = "testing"
image = module.ecr_testing_img.full_image_url
linuxParameters = {
initProcessEnabled = true
}
logConfiguration = {
logDriver = "awslogs",
options = {
awslogs-group = aws_cloudwatch_log_group.testing.name
awslogs-region = data.aws_region.current.name
awslogs-stream-prefix = "testing"
}
}
cpu = 256
memory = 512
}])
runtime_platform {
operating_system_family = "LINUX"
cpu_architecture = "X86_64"
}
}
resource "aws_security_group" "testing" {
name = "${local.mut_id}-integration-testing-ecs"
description = "Allows internet access request from testing container"
vpc_id = module.vpc.vpc_id
egress {
description = "Allows outbound HTTP access for installing packages within container"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "Allows outbound HTTPS access for installing packages within container"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Snippet of Bash script that runs the ECS task and execute command within container:
task_id=$(aws ecs run-task \
--cluster "$cluster_arn" \
--task-definition "$task_arn" \
--launch-type FARGATE \
--platform-version '1.4.0' \
--enable-execute-command \
--network-configuration awsvpcConfiguration="{subnets=[$subnet_id],securityGroups=[$sg_id],assignPublicIp=ENABLED}" \
--region $AWS_REGION | jq -r '.tasks[0].taskArn | split("/") | .[-1]')
echo "Task ID: $task_id"
if [ "$run_ecs_exec_check" == true ]; then
bash <( curl -Ls https://raw.githubusercontent.com/aws-containers/amazon-ecs-exec-checker/main/check-ecs-exec.sh ) "$cluster_arn" "$task_id"
fi
sleep_time=10
status=""
echo ""
echo "Waiting for task to be running"
while [ "$status" != "RUNNING" ]; do
echo "Checking status in $sleep_time seconds..."
sleep $sleep_time
status=$(aws ecs describe-tasks \
--cluster "$cluster_arn" \
--region $AWS_REGION \
--tasks "$task_id" | jq -r '.tasks[0].containers[0].managedAgents[] | select(.name == "ExecuteCommandAgent") | .lastStatus')
echo "Status: $status"
if [ "$status" == "STOPPED" ]; then
aws ecs describe-tasks \
--cluster "$cluster_arn" \
--region $AWS_REGION \
--tasks "$task_id"
exit 1
fi
# sleep_time=$(( $sleep_time * 2 ))
done
echo "Running interactive shell within container"
aws ecs execute-command \
--region $AWS_REGION \
--cluster "$cluster_arn" \
--task "$task_id" \
--command "/bin/bash" \
--interactive
As soon as the last command in your entrypoint.sh finishes, the Docker container is going to exit, just as it would if you ran the container locally. I suggest first getting the container to run locally without exiting, and then deploying that to ECS.
A command like tail -f /dev/null will work if you just want the container to sit there doing nothing.
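For example, here is a minimal sketch of the entrypoint.sh from the question with such a command appended; the final tail line is the only addition, and it assumes you really do want the container to idle until you attach to it:
#!/bin/bash
if [ -n "$ADDITIONAL_PATH" ]; then
    echo "Adding to PATH: $ADDITIONAL_PATH"
    export PATH="$ADDITIONAL_PATH:$PATH"
fi

source "$VIRTUAL_ENV/bin/activate"
pip install -e /src
echo "done"

# Keep the container alive so the ECS task stays RUNNING and
# `aws ecs execute-command` has a process to attach to.
tail -f /dev/null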

Terraform aws_eks_node_group creation error with launch_template "Unsupported - The requested configuration is currently not supported"

Objective of my effort: create an EKS node group with a custom AMI (Ubuntu)
Issue statement: on creating the aws_eks_node_group along with a launch_template, I am getting this error:
Error: error waiting for EKS Node Group (qa-svr-centinela-eks-cluster01:qa-svr-centinela-nodegroup01) creation: AsgInstanceLaunchFailures: Could not launch On-Demand Instances. Unsupported - The requested configuration is currently not supported. Please check the documentation for supported configurations. Launching EC2 instance failed.. Resource IDs: [eks-82bb24f0-2d7e-ba9d-a80a-bb9653cde0c6]
Research so far: as per AWS, we can now use custom AMIs for EKS.
The custom Ubuntu image I am using is built with Packer, and I was encrypting the boot volume with an AWS KMS external key. At first I thought the encryption used for the AMI was causing the problem, so I removed the encryption from the Packer code.
But that didn't resolve the issue. Maybe I am not thinking in the right direction?
Any help is much appreciated. Thanks.
The Terraform code used is below. I am attempting to create an EKS node group with a launch template, but I am running into an error.
packer code
source "amazon-ebs" "ubuntu18" {
ami_name = "pxx3"
ami_virtualization_type = "hvm"
tags = {
"cc" = "sxx1"
"Name" = "packerxx3"
}
region = "us-west-2"
instance_type = "t3.small"
# AWS Ubuntu AMI
source_ami = "ami-0ac73f33a1888c64a"
associate_public_ip_address = true
ebs_optimized = true
# public subnet
subnet_id = "subnet-xx"
vpc_id = "vpc-xx"
communicator = "ssh"
ssh_username = "ubuntu"
}
build {
sources = [
"source.amazon-ebs.ubuntu18"
]
provisioner "ansible" {
playbook_file = "./ubuntu.yml"
}
}
ubuntu.yml - only used for installing a few libraries
---
- hosts: default
  gather_facts: no
  become: yes
  tasks:
    - name: create the license key for new relic agent
      shell: |
        curl -s https://download.newrelic.com/infrastructure_agent/gpg/newrelic-infra.gpg | apt-key add - && \
        printf "deb [arch=amd64] https://download.newrelic.com/infrastructure_agent/linux/apt bionic main" | tee -a /etc/apt/sources.list.d/newrelic-infra.list
    - name: check sources.list
      shell: |
        cat /etc/apt/sources.list.d/newrelic-infra.list
    - name: apt-get update
      apt: update_cache=yes force_apt_get=yes
    - name: install new relic agent
      package:
        name: newrelic-infra
        state: present
    - name: update apt-get repo and cache
      apt: update_cache=yes force_apt_get=yes
    - name: apt-get upgrade
      apt: upgrade=dist force_apt_get=yes
    - name: install essential softwares
      package:
        name: "{{ item }}"
        state: latest
      loop:
        - software-properties-common
        - vim
        - nano
        - glibc-source
        - groff
        - less
        - traceroute
        - whois
        - telnet
        - dnsutils
        - git
        - mlocate
        - htop
        - zip
        - unzip
        - curl
        - ruby-full
        - wget
      ignore_errors: yes
    - name: Add the ansible PPA to your system’s sources list
      apt_repository:
        repo: ppa:ansible/ansible
        state: present
        mode: 0666
    - name: Add the deadsnakes PPA to your system’s sources list
      apt_repository:
        repo: ppa:deadsnakes/ppa
        state: present
        mode: 0666
    - name: install softwares
      package:
        name: "{{ item }}"
        state: present
      loop:
        - ansible
        - python3.8
        - python3-winrm
      ignore_errors: yes
    - name: install AWS CLI
      shell: |
        curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
        unzip awscliv2.zip
        ./aws/install
aws_eks_node_group configuration.
resource "aws_eks_node_group" "nodegrp" {
cluster_name = aws_eks_cluster.eks.name
node_group_name = "xyz-nodegroup01"
node_role_arn = aws_iam_role.eksnode.arn
subnet_ids = [data.aws_subnet.tf_subnet_private01.id, data.aws_subnet.tf_subnet_private02.id]
scaling_config {
desired_size = 2
max_size = 2
min_size = 2
}
depends_on = [
aws_iam_role_policy_attachment.nodepolicy01,
aws_iam_role_policy_attachment.nodepolicy02,
aws_iam_role_policy_attachment.nodepolicy03
]
launch_template {
id = aws_launch_template.eks.id
version = aws_launch_template.eks.latest_version
}
}
aws_launch_template configuration.
resource "aws_launch_template" "eks" {
name = "${var.env}-launch-template"
update_default_version = true
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 50
}
}
credit_specification {
cpu_credits = "standard"
}
ebs_optimized = true
# AMI generated with packer (is private)
image_id = "ami-0ac71233a184566453"
instance_type = "t3.micro"
key_name = "xyz"
network_interfaces {
associate_public_ip_address = false
}
}

How to create AWS AMI from created instance using terraform?

I am setting up an AWS instance with a WordPress installation and want to create an AMI from the created instance. Below I attach my code.
provider "aws" {
region = "${var.region}"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
resource "aws_instance" "test-wordpress" {
ami = "${var.image_id}"
instance_type = "${var.instance_type}"
key_name = "test-web"
#associate_public_ip_address = yes
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo yum install -y httpd mariadb-server
cd /var/www/html
sudo echo "healthy" > healthy.html
sudo wget https://wordpress.org/latest.tar.gz
sudo tar -xzf latest.tar.gz
sudo cp -r wordpress/* /var/www/html/
sudo rm -rf wordpress
sudo rm -rf latest.tar.gz
sudo chmod -R 755 wp-content
sudo chown -R apache:apache wp-content
sudo service httpd start
sudo chkconfig httpd on
EOF
tags = {
Name = "test-Wordpress-Server"
}
}
resource "aws_ami_from_instance" "test-wordpress-ami" {
name = "test-wordpress-ami"
source_instance_id = "${aws_instance.test-wordpress.id}"
depends_on = [
aws_instance.test-wordpress,
]
tags = {
Name = "test-wordpress-ami"
}
}
The AMI gets created, but when I use that AMI to create another instance, the WordPress installation is not there. How can I solve this issue?
The best way to create AMI images, I think, is using Packer, which is also from HashiCorp, like Terraform.
What is Packer?
Provision Infrastructure with Packer: Packer is HashiCorp's open-source tool for creating machine images from source
configuration. You can configure Packer images with an operating
system and software for your specific use-case.
Packer creates an instance with a temporary key pair, security group and IAM role. Custom inline commands are possible in the "shell" provisioner. Afterwards you can use this AMI with your Terraform code.
A sample script could look like this:
packer {
required_plugins {
amazon = {
version = ">= 0.0.2"
source = "github.com/hashicorp/amazon"
}
}
}
source "amazon-ebs" "linux" {
# AMI Settings
ami_name = "ami-oracle-python3"
instance_type = "t2.micro"
source_ami = "ami-xxxxxxxx"
ssh_username = "ec2-user"
associate_public_ip_address = false
ami_virtualization_type = "hvm"
subnet_id = "subnet-xxxxxx"
launch_block_device_mappings {
device_name = "/dev/xvda"
volume_size = 8
volume_type = "gp2"
delete_on_termination = true
encrypted = false
}
# Profile Settings
profile = "xxxxxx"
region = "eu-central-1"
}
build {
sources = [
"source.amazon-ebs.linux"
]
provisioner "shell" {
inline = [
"export no_proxy=localhost"
]
}
}
You can find the documentation here.
You can then search for the AMI by its tag, as described in the documentation for the aws_ami data source.
In your case:
data "aws_ami" "example" {
executable_users = ["self"]
most_recent = true
owners = ["self"]
filter {
name = "tag:Name"
values = ["test-wordpress-ami"]
}
}
and then reference the AMI ID as ${data.aws_ami.example.image_id}.
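For instance, here is a sketch of a second instance built from the looked-up AMI; the resource name and tag value are made up for illustration, and it assumes the same variables and key pair as in your original configuration:
resource "aws_instance" "test-wordpress-copy" {
  ami           = "${data.aws_ami.example.image_id}"
  instance_type = "${var.instance_type}"
  key_name      = "test-web"

  tags = {
    Name = "test-Wordpress-Server-from-AMI"
  }
}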

terraform user-data not working though rendered

I am creating an EC2 machine, and when it is up I want Docker installed on it and the machine to register itself with the Rancher server.
The Terraform script for this is:
provider "aws" {
region = "ap-south-1"
}
resource "aws_instance" "rancherHost" {
tags {
name = "tf-rancher-host-rmf44"
}
ami = "ami-0189d76e"
instance_type = "t2.micro"
vpc_security_group_ids = ["${aws_security_group.port500and4500.id}"]
user_data = "${data.template_file.user-data.rendered}"
}
data "template_file" "user-data" {
template = "${file("script.sh")}"
}
resource "aws_security_group" "port500and4500" {
ingress {
from_port = 500
protocol = "udp"
to_port = 500
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 4500
protocol = "udp"
to_port = 4500
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
protocol = "tcp"
to_port = 443
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
protocol = "tcp"
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
}
}
output "instanceDetail" {
value = "${aws_instance.rancherHost.public_ip}"
}
And the script I have written is in the same directory; its contents are:
#!/bin/bash
echo "fail fast learn fast"
echo "this is second line"
apt-get update -y || echo $?
apt-get install -y docker
apt-get install -y docker.io
echo "echo failing now maybe"
service docker start
When I ran this, the machine was created and the user data is visible on the instance. I also checked the logs: only the first two echos were present and nothing else. Am I doing something wrong here? Manual creation using the same user data worked and the host was added to the Rancher server.
The OS is Ubuntu 16.04.