I am trying to spin up an ECS cluster with Terraform, but I cannot get the EC2 instances to register as container instances in the cluster.
I first tried the verified module from Terraform, but it seems outdated (ecs-instance-profile has the wrong path).
Then I tried with another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) on the same VPC and carefully compared the resources. I found some small differences, fixed them and tried again, but still no container instances!
A fork of the module is available here.
Can you see instances being created in the autoscaling group? If so, I'd suggest SSHing into one of them (either directly or via a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that's pretty visible in the logs, but YMMV.
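If the logs do point at IAM, the usual missing piece is the instance profile on the container instances: the role needs the ECS agent permissions, which AWS ships as the AmazonEC2ContainerServiceforEC2Role managed policy. A minimal sketch of what that looks like, with placeholder names:

resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role" # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile" # placeholder name
  role = aws_iam_role.ecs_instance.name
}

Whatever cluster module you end up with should be wiring an equivalent profile onto the launch configuration behind the autoscaling group.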
Related
I want to create an EKS cluster using Terraform, build custom Docker images, and then perform Kubernetes deployments on the created cluster, all via Terraform. I want to perform all of these tasks with a single terraform apply, but I see that the Kubernetes provider needs the cluster details at initialization. Is there a way to achieve both cluster creation and deployment with a single terraform apply, so that once the cluster is created, its details are passed to the Kubernetes provider and the pods are then deployed?
Please let me know how I can achieve this.
I am doing this with ease; below is the pseudo code. You just need to be careful with how you use the depends_on attribute on resources, and try to encapsulate as much as possible.
Put the Kubernetes provider in a separate file, e.g. kubernetes.tf:
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      module.eks.cluster_id
    ]
  }
}
Assuming you've got the network set up, here I am relying on implicit dependencies rather than explicit ones.
module "eks" {
source = "./modules/ekscluster"
clustername = module.clustername.cluster_name
eks_version = var.eks_version
private_subnets = module.networking.private_subnets_id
vpc_id = module.networking.vpc_id
environment = var.environment
instance_types = var.instance_types
}
Creating k8s resources using the depends_on attribute.
resource "kubernetes_namespace_v1" "app" {
metadata {
annotations = {
name = var.org_name
}
name = var.org_name
}
depends_on = [module.eks]
}
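Once the namespace exists, the deployments themselves can be ordinary kubernetes_* resources too; referencing the namespace attribute gives you the ordering for free. A rough sketch (the app name and image are placeholders):

resource "kubernetes_deployment_v1" "app" {
  metadata {
    name      = "example-app" # placeholder
    namespace = kubernetes_namespace_v1.app.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "example-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "example-app"
        }
      }

      spec {
        container {
          name  = "example-app"
          image = "nginx:1.25" # placeholder image
        }
      }
    }
  }
}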
I have a Terraform module where I want to refactor the EFS so it is managed in the module rather than in the Terraform I presently have. Presently my Terraform includes two VPCs sharing an EFS via a VPC peering connection.
Eventually I want to get rid of the old VPC, but the EFS is still held by it, and EFS does not allow creating an aws_efs_mount_target in a different VPC.
resource "aws_efs_mount_target" "main" {
for_each = toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
So I was wondering, is it possible to set something along the lines of
disable {
module.common.aws_efs_mount_target.main
}
or
module "common" {
exclude = [ "aws_efs_mount_target.main " ]
}
This is how I solved it: basically, create a new variable and use it as a condition to produce an empty set.
resource "aws_efs_mount_target" "main" {
for_each = var.skip_efs_mount_target ? [] : toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
I'm new to Terraform and I'm working on an infrastructure setup for deploying Docker containers. I've based my ECS cluster on Infrablocks/ECS-Cluster and my base networking on Infrablocks/Base-Network. I've opted to use these due to time constraints on the project.
The problem I'm having is that the two EC2 container instances created by the Infrablocks/ECS-Cluster module are not associated with the ECS cluster that Infrablocks builds, and I've had zero luck determining why. This is blocking my task definitions from being able to run containers in the ECS cluster, because there are no associated EC2 instances. I've provided my two dependent module configurations below.
Thank you in advance for any help you can provide!
My Terraform thus far:
module "base_network" {
source = "infrablocks/base-networking/aws"
version = "2.3.0"
vpc_cidr = "10.0.0.0/16"
region = "us-east-1"
availability_zones = ["us-east-1a", "us-east-1b"]
component = "dev-base-network"
deployment_identifier = "development"
include_route53_zone_association = "true"
private_zone_id = module.route53.private_zone_id
include_nat_gateway = "true"}
module "ecs_cluster" {
source = "infrablocks/ecs-cluster/aws"
version = "2.2.0"
region = "us-east-1"
vpc_id = module.base_network.vpc_id
subnet_ids = module.base_network.public_subnet_ids
associate_public_ip_addresses = "yes"
component = "dev"
deployment_identifier = "devx"
cluster_name = "services"
cluster_instance_ssh_public_key_path = "~/.ssh/id_rsa.pub"
cluster_instance_type = "t2.small"
cluster_minimum_size = 2
cluster_maximum_size = 10
cluster_desired_capacity = 2 }
You'd have to troubleshoot the instance to see why it isn't joining the cluster. On your EC2 instances (I have not checked, but I would hope the Infrablocks ecs-cluster module uses an AMI with the ECS agent installed), you can look in /var/log/ecs/ecs-agent.log.
If the networking configuration is sound, my first guess would be to check the ECS configuration file, /etc/ecs/ecs.config. If the module is working properly, it should have populated that config with the cluster name. See here for more on that.
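For reference, on the ECS-optimized AMI that file is normally seeded from user data, so you would expect the module (or your own launch configuration) to be doing something roughly like this; the variables here are assumptions for the sketch, and "services" matches the cluster_name above:

resource "aws_launch_configuration" "ecs" {
  name_prefix          = "ecs-"
  image_id             = var.ecs_ami_id           # assumed: an ECS-optimized AMI
  instance_type        = "t2.small"
  iam_instance_profile = var.ecs_instance_profile # assumed: profile with the ECS agent policy

  # Tell the ECS agent which cluster to register with.
  user_data = <<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=services" >> /etc/ecs/ecs.config
  EOF

  lifecycle {
    create_before_destroy = true
  }
}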
(I would have commented instead of answered but this account doesn't have enough rep :shrug:)
I would like to launch an EC2 instance without a key pair from my Terraform configuration. I could not find any information online about using "no key pair" with Terraform. Has anyone configured Terraform this way?
Here's a Terraform script that creates an EC2 instance of type t2.micro without a key pair and outputs its IP address.
terraform.tf:
provider "aws" {
profile = "default"
region = "us-west-2"
}
variable "instance_type" {
default = "t2.micro"
}
resource "aws_instance" "ec2_instance" {
ami = "ami-0d1cd67c26f5fca19"
instance_type = "var.instance_type"
}
output "ip" {
value = "aws_instance.ec2_instance.public_ip"
}
Put it in a directory and run it with terraform apply.
You can use terraform plan to preview it first.
Note: don't forget to add your access_key and secret_key to your local AWS configuration (aws configure) in order for it to work. You can also use aws-vault to avoid accidentally exposing your credentials.
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to address the question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG already exists. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, running a destroy tries to destroy my VPC as well. How do I scope my resources so that terraform apply creates an EC2 instance in my imported VPC, but terraform destroy only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you already have. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when you deploy without the VPC and just use the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you actually want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or by tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
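Anything else in the configuration can then hang off that lookup rather than a managed aws_vpc resource, for example (the security group name is just illustrative):

resource "aws_security_group" "instance" {
  name   = "ec2-instance-sg"    # illustrative name
  vpc_id = data.aws_vpc.main.id # existing VPC looked up above, not managed by this state
}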
Also, if you already included your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state command (e.g. terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html