Terraform set AMI permissions to public

I'm currently using Terraform to copy an AMI from one region to multiple regions using:
resource "aws_ami_copy" "my_ami" {
  name              = "my_ami-${var.region}"
  source_ami_id     = "${var.source_ami_id}"
  source_ami_region = "${var.source_ami_region}"
}
I need to make this AMI public. I've looked online and I can't find a way to do this using Terraform.

Terraform doesn't currently have a native way to do this. You can normally use Terraform to share an AMI with another account using the aws_ami_launch_permission resource, but this only supports adding specific account IDs, not the all group required for making an AMI public.
You could always use a local-exec provisioner to shell out to the AWS CLI to make the AMI public with something like:
resource "null_resource" "share_ami_publicly" {
  provisioner "local-exec" {
    command = "aws ec2 modify-image-attribute --image-id ami-12345678 --launch-permission '{\"Add\":[{\"Group\":\"all\"}]}'"
  }
}
Where the provisioner could be attached to any relevant resource (such as the aws_ami resource if you are using that to create AMIs).
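Note: more recent versions of the AWS provider have since added a group argument to aws_ami_launch_permission, which, if your provider version supports it, makes this possible natively. A sketch, assuming a recent provider:

```hcl
# Sketch, assuming an AWS provider version that supports the "group"
# argument on aws_ami_launch_permission.
resource "aws_ami_launch_permission" "public" {
  image_id = aws_ami_copy.my_ami.id
  group    = "all"
}
```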

Related

copy or share aws ami across account via terraform

I am creating an AWS AMI using Packer and trying to copy or share it across accounts via Terraform.
The AMI is in the Mumbai region (ap-south-1) and I want to copy it to the Hyderabad region (ap-south-2) with tags intact.
I was checking https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ami_copy
resource "aws_ami_copy" "example" {
  name              = "terraform-example"
  description       = "A copy of ami-xxxxxxxx"
  source_ami_id     = "ami-xxxxxxxx"
  source_ami_region = "us-west-1"

  tags = {
    Name = "HelloWorld"
  }
}
You can use ami_users directly from Packer to share the image with other accounts, and you can use ami_regions to copy the image to different regions.
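For example, a Packer (HCL2) amazon-ebs source might look like the sketch below; the AMI name, regions, account ID, and source AMI filter are all placeholders:

```hcl
# Sketch of a Packer (HCL2) amazon-ebs source; all names and IDs are placeholders.
source "amazon-ebs" "example" {
  ami_name      = "my-app-ami"
  instance_type = "t3.micro"
  region        = "ap-south-1"
  ssh_username  = "ubuntu"

  # Copy the resulting AMI (tags included) to additional regions.
  ami_regions = ["ap-south-2"]

  # Share the AMI with other AWS accounts by account ID.
  ami_users = ["111122223333"]

  source_ami_filter {
    filters = {
      name = "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.example"]
}
```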

Attach IAM role to existing EC2 instance using terraform

I am trying to attach an IAM role to an existing EC2 instance using Terraform. But after looking at some web pages, I found that attaching can only be done at the time of creating the EC2 instance.
resource "aws_instance" "web" {
  ami                  = data.aws_ami.ubuntu.id
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.ec2_profile.name

  tags = {
    Name = "HelloWorld"
  }
}
As can be seen above, an AMI ID is passed, which creates a new instance.
Is it somehow possible to provide an instance ID instead of an AMI ID, so that the role can be attached to an existing instance?
I found a link in the Terraform community pointing out that this feature is not yet released.
https://github.com/hashicorp/terraform/issues/11852
Please provide inputs on how to accomplish this task.
Thanks in advance.
As you pointed out, this is not supported directly. But if you really want to use Terraform for this, you could consider two options:
Use a local-exec provisioner to call the AWS CLI's associate-iam-instance-profile command to attach the role to an existing instance.
Use aws_lambda_invocation. This way you could invoke a custom Lambda function from your Terraform configuration that uses the AWS SDK to associate the profile with the instance. For example, in boto3 the method is associate_iam_instance_profile.
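A sketch of the first option, assuming the AWS CLI is configured locally; the instance ID and role name are placeholders:

```hcl
# Sketch: attach an instance profile to an existing instance via the AWS CLI.
# "my-existing-role" and the instance ID are placeholders.
resource "aws_iam_instance_profile" "attach" {
  name = "attach-profile"
  role = "my-existing-role"
}

resource "null_resource" "associate_profile" {
  provisioner "local-exec" {
    command = "aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=${aws_iam_instance_profile.attach.name}"
  }
}
```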

How to launch an EC2 instance without key pair via Terraform?

I would like to launch an EC2 instance without a key pair from my Terraform configuration. I could not find any information online indicating how to specify "no key pair" in Terraform. Has anyone configured Terraform this way?
Here's a Terraform script that creates a t2.micro EC2 instance without a key pair and outputs its IP address.
terraform.tf:
provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

variable "instance_type" {
  default = "t2.micro"
}

resource "aws_instance" "ec2_instance" {
  ami           = "ami-0d1cd67c26f5fca19"
  instance_type = var.instance_type
}

output "ip" {
  value = aws_instance.ec2_instance.public_ip
}
Put it in a directory and run it with terraform apply.
You can use terraform plan first to preview the changes.
Note: Don't forget to add your access_key and secret_key to your local AWS configuration (aws configure) for this to work. You can also use aws-vault to avoid mistakenly exposing your credentials.

How to block Terraform from deleting an imported resource?

I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an existing AWS VPC/security group that our EC2 instances need to be created in. To create an EC2 instance, Terraform seems to require that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I later run destroy, Terraform tries to destroy my VPC as well. How do I structure my resources so that terraform apply creates an EC2 instance in my imported VPC, but terraform destroy only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
  cidr_block = "xx.xx.xx.xx/24"
}

provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.nano"
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
  subnet_id              = "subnet-0755c2exxxxxxxx"

  tags = {
    Name = "HelloWorld"
  }
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the security group and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you already provide. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the aws_vpc resource and just using the existing VPC?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
  tags = {
    Name = "main_vpc"
  }
}

Or

data "aws_vpc" "main" {
  id = "vpc-nnnnnnnn"
}
Then refer to it as data.aws_vpc.main (for example, data.aws_vpc.main.id).
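For example, a security group (a sketch; the name is a placeholder) can reference the existing VPC via the data source without Terraform managing the VPC itself:

```hcl
# Sketch: references the VPC data source above. The VPC stays unmanaged,
# so "terraform destroy" will not attempt to delete it.
resource "aws_security_group" "web" {
  name   = "web-sg" # placeholder
  vpc_id = data.aws_vpc.main.id
}
```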
Also, if you already imported your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state rm command (e.g. terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html

Add existing IAM role to EC2 instance using Terraform

I have an existing IAM role (not created using Terraform), and I need to add it to an EC2 instance I have built using Terraform.
I have tried various options using aws_iam_role and iam_instance_profile but cannot get this to work.
Import your existing role into Terraform.
terraform import aws_iam_role.my_role_identifier my_existing_role_name
Create an instance profile from this role.
resource "aws_iam_instance_profile" "my_instance_profile" {
  name = "my_instance_profile"
  role = "${aws_iam_role.my_role_identifier.name}"
}
Pass this instance profile to your instance.
resource "aws_instance" "my_instance" {
  ...
  iam_instance_profile = "${aws_iam_instance_profile.my_instance_profile.name}"
}
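As an alternative to importing, the existing role can be read with a data source, so it never enters Terraform state at all. A sketch; the role name is a placeholder:

```hcl
# Sketch: read the existing role without bringing it under Terraform management.
data "aws_iam_role" "existing" {
  name = "my_existing_role_name" # placeholder
}

resource "aws_iam_instance_profile" "profile_from_data" {
  name = "profile_from_data"
  role = data.aws_iam_role.existing.name
}
```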
Thanks all. In the end I cloned my role using Terraform, assigned it to the instance, and removed the original.