I would like to launch an EC2 instance without a key pair using my Terraform configuration. I could not find any information online about using "no key pair" in Terraform. Has anyone configured Terraform this way?
Here's a Terraform configuration that creates a t2.micro EC2 instance without a key pair and outputs its public IP address.
terraform.tf:
provider "aws" {
profile = "default"
region = "us-west-2"
}
variable "instance_type" {
default = "t2.micro"
}
resource "aws_instance" "ec2_instance" {
ami = "ami-0d1cd67c26f5fca19"
instance_type = "var.instance_type"
}
output "ip" {
value = "aws_instance.ec2_instance.public_ip"
}
Put it in a directory and run it with terraform apply.
You can use terraform plan first to preview the changes.
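If this is a fresh working directory, run terraform init first so the AWS provider plugin gets installed; the typical sequence is:

terraform init
terraform plan
terraform apply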
Note: Don't forget to add your access_key and secret_key to your local AWS configuration (aws configure) for this to work. You can also use aws-vault to avoid accidentally exposing your credentials.
Terraform CODE:
provider "aws" {
region = "us-east-1"
access_key = "*********************"
secret_key = "***************************"
}
resource "aws_vpc" "mainvpc" {
cidr_block = "10.1.0.0/16"
tags = {
Name = "example"
}
}
resource "aws_subnet" "main" {
vpc_id = aws_vpc.mainvpc.id
cidr_block = "10.0.1.0/24"
tags = {
Name = "Main"
}
}
Terminal Output:
PS C:\Users\surya\Documents\Terraform_project> terraform init

Initializing the backend...
Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

PS C:\Users\surya\Documents\Terraform_project> terraform plan

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

PS C:\Users\surya\Documents\Terraform_project> terraform apply

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
I want to create an EKS cluster using Terraform, build custom Docker images, and then perform Kubernetes deployments on the created cluster via Terraform. I want to perform all of these tasks with a single terraform apply. But I see that the Kubernetes provider needs the cluster details at initialization. Is there a way I can achieve both cluster creation and deployment with a single terraform apply, so that once the cluster is created, its details are passed to the Kubernetes provider and the pods are deployed?
Please let me know how I can achieve this.
I am doing this without issue; below is the pseudo code. You just need to be careful with the way you use the depends_on attribute on resources, and try to encapsulate as much as possible.
Put the Kubernetes provider in a separate file, e.g. kubernetes.tf:
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      module.eks.cluster_id
    ]
  }
}
Assuming you've got the network set up, here I am relying on implicit dependencies rather than explicit ones.
module "eks" {
source = "./modules/ekscluster"
clustername = module.clustername.cluster_name
eks_version = var.eks_version
private_subnets = module.networking.private_subnets_id
vpc_id = module.networking.vpc_id
environment = var.environment
instance_types = var.instance_types
}
Then create the k8s resources using the depends_on attribute:
resource "kubernetes_namespace_v1" "app" {
metadata {
annotations = {
name = var.org_name
}
name = var.org_name
}
depends_on = [module.eks]
}
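To cover the deployment part of the question, here is a minimal sketch of a workload resource; the app name and image are just placeholders, and the important part is that it also depends on the EKS module so it is only created after the cluster exists.
resource "kubernetes_deployment_v1" "app" {
  metadata {
    name      = "example-app" # placeholder name
    namespace = kubernetes_namespace_v1.app.metadata[0].name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "example-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "example-app"
        }
      }

      spec {
        container {
          name  = "example-app"
          image = "nginx:1.25" # placeholder; use your custom image here
        }
      }
    }
  }

  depends_on = [module.eks]
}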
I am trying to spin up an ECS cluster with Terraform, but I cannot get the EC2 instances to register as container instances in the cluster.
I first tried the verified module from Terraform, but it seems outdated (ecs-instance-profile has the wrong path).
Then I tried another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) on the same VPC and carefully compared the resources. I managed to find some small differences, fixed them, and tried again. But still no container instances!
A fork of the module is available here.
Can you see instances being created in the Auto Scaling group? If so, I'd suggest SSHing into one of them (either directly or via a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that's usually visible in the logs, but YMMV.
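As a rough sketch of the pieces that usually have to be in place for instances to register (these are not the module's internals; the resource names are made up and the cluster name is assumed to match the module's name argument), the container instances need an instance profile with the AmazonEC2ContainerServiceforEC2Role policy attached, and user data that writes the cluster name into /etc/ecs/ecs.config:
resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role" # example name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile" # example name
  role = aws_iam_role.ecs_instance.name
}

# User data passed to the launch configuration / launch template. Without
# ECS_CLUSTER the agent tries to join the "default" cluster, so your cluster
# shows no container instances.
locals {
  ecs_user_data = <<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=ecs-alb-single-svc" >> /etc/ecs/ecs.config
  EOF
}
If the agent log shows authorization errors, it is usually the role/profile part that is missing or wrong.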
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG already exists. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I wish to destroy the stack, it tries to destroy my VPC as well. How do I structure my resources so that when I run terraform apply I create an EC2 instance in my imported VPC, but when I run terraform destroy I only destroy the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code:
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. To deploy the aws_instance, all you should need is an existing subnet ID in the existing VPC, which you already have. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC resource and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you really want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
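For example, assuming the existing VPC has a subnet tagged "Main" (the tag value here is purely illustrative), you could look the subnet up through the VPC data source instead of hardcoding its ID:
data "aws_subnet" "main" {
  vpc_id = data.aws_vpc.main.id

  tags = {
    Name = "Main"
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.nano"
  subnet_id     = data.aws_subnet.main.id # resolved from the existing VPC, not managed by this stack
}
Because data sources only read existing infrastructure, terraform destroy never touches the VPC or subnet themselves.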
Also, if you have already imported your VPC but would like to remove it from your state without destroying it, you can do so with the terraform state command: https://www.terraform.io/docs/commands/state/index.html
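For instance, to stop managing the VPC resource from the example code above without touching the actual VPC:

terraform state rm aws_vpc.my_vpc

Remember to also remove the aws_vpc resource block from the configuration, otherwise the next plan will try to create a new VPC.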
I'm currently using Terraform to copy an AMI from one region to multiple regions using:
resource "aws_ami_copy" "my_ami" {
name = "my_ami-${var.region}"
source_ami_id = "${var.source_ami_id}"
source_ami_region = "${var.source_ami_region}"
}
I need to make this AMI public, but I've looked online and can't find a way to do this with Terraform.
Terraform doesn't currently have a native way to do this. You can normally use Terraform to share an AMI with another account using the aws_ami_launch_permission resource, but this only supports adding specific account IDs, not the all group required to make an AMI public.
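For reference, account-level sharing (which does work natively) looks roughly like this; the account ID is a placeholder:
resource "aws_ami_launch_permission" "share_with_account" {
  image_id   = aws_ami_copy.my_ami.id
  account_id = "123456789012" # placeholder target account ID
}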
You could always use a local-exec provisioner to shell out to the AWS CLI and make the AMI public with something like:
resource "null_resource" "share_ami_publicly" {
provisioner "local-exec" {
command = "aws ec2 modify-image-attribute --image-id ami-12345678 --launch-permission '{\"Add\":[{\"Group\":\"all\"}]}'"
}
}
Where the provisioner could be attached to any relevant resource (such as the aws_ami resource if you are using that to create AMIs).
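Attached to the aws_ami_copy resource from the question, that could look something like the sketch below (interpolating the copied AMI's ID via self instead of hardcoding it):
resource "aws_ami_copy" "my_ami" {
  name              = "my_ami-${var.region}"
  source_ami_id     = "${var.source_ami_id}"
  source_ami_region = "${var.source_ami_region}"

  # Runs once after the AMI copy is created and makes it public via the AWS CLI.
  provisioner "local-exec" {
    command = "aws ec2 modify-image-attribute --image-id ${self.id} --launch-permission '{\"Add\":[{\"Group\":\"all\"}]}'"
  }
}
Keep in mind provisioners only run on creation, so re-running apply on an existing AMI copy will not re-share it.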