copy or share aws ami across account via terraform - amazon-web-services

I am creating an AWS AMI using Packer and trying to copy or share it across accounts via Terraform.
The AMI is in the Mumbai region (ap-south-1) and I want to copy it to the Hyderabad region (ap-south-2) with its tags intact.
I was checking https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ami_copy
resource "aws_ami_copy" "example" {
name = "terraform-example"
description = "A copy of ami-xxxxxxxx"
source_ami_id = "ami-xxxxxxxx"
source_ami_region = "us-west-1"
tags = {
Name = "HelloWorld"
}
}

You can use ami_users directly from packer to share the image with other accounts, and you can use ami_regions to copy the image to different regions.
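For example, a minimal sketch of a Packer HCL2 template using the amazon-ebs builder; the source AMI, AMI name, and account ID are placeholders, and per the builder docs, tags are applied in every region the AMI is copied to:

source "amazon-ebs" "example" {
  region        = "ap-south-1"   # build in Mumbai
  source_ami    = "ami-xxxxxxxx" # placeholder base AMI
  instance_type = "t3.micro"
  ssh_username  = "ubuntu"
  ami_name      = "my-ami-example"

  # Copy the resulting AMI to Hyderabad as well
  ami_regions = ["ap-south-2"]

  # Share the AMI (and its copies) with another account
  ami_users = ["111122223333"] # placeholder account ID

  tags = {
    Name = "HelloWorld"
  }
}

build {
  sources = ["source.amazon-ebs.example"]
}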

Related

AWS - Creating resources in a multi-account environment

I just created a new AWS account using the Terraform aws_organizations_account resource. What I am now trying to do is create resources in that new account. I guess I need the account_id of the new account to do that, so I stored it in an output variable, but beyond that I have no idea how to create, for example, an aws_s3_bucket in it.
provider.tf
provider "aws" {
  region = "us-east-1"
}
main.tf
resource "aws_organizations_account" "account" {
  name      = "tmp"
  email     = "first.last+tmp@company.com"
  role_name = "myOrganizationRole"
  parent_id = "xxxxx"
}

## what I am trying to create inside that tmp account
resource "aws_s3_bucket" "bucket" {}
outputs.tf
output "account_id" {
  value     = aws_organizations_account.account.id
  sensitive = true
}
You can't do this the way you want. You need an entire account-creation pipeline for that. Roughly, the pipeline would have two main stages:
1. Create your AWS Organization and member accounts.
2. Assume a role in each member account, and run your TF code against that account to create resources (see the provider sketch below).
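A minimal sketch of the second stage, reusing the role name from the question (myOrganizationRole) and a placeholder bucket name. Note that configuring a provider from an attribute of a resource in the same configuration has ordering caveats, which is why separate pipeline stages are the more robust pattern:

provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    # myOrganizationRole is created in the member account by AWS Organizations
    role_arn = "arn:aws:iam::${aws_organizations_account.account.id}:role/myOrganizationRole"
  }
}

resource "aws_s3_bucket" "bucket" {
  provider = aws.member
  bucket   = "tmp-account-example-bucket" # placeholder; bucket names are global
}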
There are many ways of doing this, and also there are many resources on this topic. Some of them are:
How to Build an AWS Multi-Account Strategy with Centralized Identity Management
Setting up an AWS organization from scratch with Terraform
Terraform on AWS: Multi-Account Setup and Other Advanced Tips
Apart from those, there is also AWS Control Tower, which can be helpful in setting up initial multi-account infrastructure.

Attach IAM role to existing EC2 instance using terraform

I am trying to attach an IAM role to an EC2 instance using Terraform. After looking at some web pages, I found that the role can be attached at the time of creating the EC2 instance.
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
iam_instance_profile = "${aws_iam_instance_profile.ec2_profile.name}"
tags = {
Name = "HelloWorld"
}
}
As can be seen above, an AMI is being passed, which creates a new instance.
Is it somehow possible to provide an instance ID instead of an AMI ID, so that the role is attached to that existing instance?
I found a link in the Terraform community pointing out that this feature is not yet released:
https://github.com/hashicorp/terraform/issues/11852
Please provide inputs on how to accomplish this task.
Thanks in advance.
As you pointed out, this is not supported natively. But if you really want to use Terraform for it, you could consider two options:
1. Use a local-exec provisioner that calls the AWS CLI command associate-iam-instance-profile to attach the role to an existing instance (see the sketch below).
2. Use aws_lambda_invocation. This way you could invoke a custom Lambda function from your Terraform which would use the AWS SDK to associate the profile with the instance. For example, for boto3 the method is associate_iam_instance_profile.
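A minimal sketch of the first option, assuming the existing instance ID is passed in as a variable (var.instance_id is a placeholder) and an aws_iam_instance_profile named ec2_profile is defined elsewhere:

variable "instance_id" {
  type = string
}

resource "null_resource" "attach_profile" {
  provisioner "local-exec" {
    # One-off CLI call; Terraform will not track or revert this association
    command = "aws ec2 associate-iam-instance-profile --instance-id ${var.instance_id} --iam-instance-profile Name=${aws_iam_instance_profile.ec2_profile.name}"
  }
}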

How to block Terraform from deleting an imported resource?

I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG already exists. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, a later destroy tries to destroy my VPC as well. How do I encapsulate my resources so that "terraform apply" creates an EC2 instance with my imported VPC, but "terraform destroy" only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You can reference the VPC, subnets, and security groups by ID so that TF is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you already supply. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when you deploy without the VPC resource and just use the existing one?
You can protect the VPC through AWS if you really wanted to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other TF stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
  tags = {
    Name = "main_vpc"
  }
}
Or:
data "aws_vpc" "main" {
  id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main (e.g. data.aws_vpc.main.id).
Also, if you have already imported your VPC but would like to remove it from your state without destroying it, you can do so with the terraform state command: https://www.terraform.io/docs/commands/state/index.html
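For example, assuming the resource address from the question, removing the VPC from state (without touching the actual AWS resource) would be:
terraform state rm aws_vpc.my_vpc
After that, terraform destroy will no longer consider the VPC.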

Terraform set AMI permissions to public

I'm currently using Terraform to copy an AMI from one region to multiple regions using:
resource "aws_ami_copy" "my_ami" {
name = "my_ami-${var.region}"
source_ami_id = "${var.source_ami_id}"
source_ami_region = "${var.source_ami_region}"
}
I need to make this AMI public. I've looked online and I can't find a way to do this using Terraform.
Terraform doesn't currently have a native way to do this. You can normally use Terraform to share an AMI with another account using the aws_ami_launch_permission resource, but it only supports adding specific account IDs, not the all group required for making an AMI public.
You could always use a local-exec provisioner to shell out to the AWS CLI and make the AMI public with something like:
resource "null_resource" "share_ami_publicly" {
provisioner "local-exec" {
command = "aws ec2 modify-image-attribute --image-id ami-12345678 --launch-permission '{\"Add\":[{\"Group\":\"all\"}]}'"
}
}
Where the provisioner could be attached to any relevant resource (such as the aws_ami resource if you are using that to create AMIs).
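For the account-to-account case mentioned above, which does have native support, a minimal sketch of aws_ami_launch_permission against the aws_ami_copy resource from the question (the account ID is a placeholder):

resource "aws_ami_launch_permission" "share" {
  image_id   = aws_ami_copy.my_ami.id
  account_id = "123456789012" # placeholder target account
}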

Terraform state locking using DynamoDB

Our Terraform layout is such that we run Terraform for many AWS accounts (100+) and save the Terraform state files remotely to a central S3 bucket.
The new locking feature sounds useful and we wish to implement it, but I am unsure whether I can use a central DynamoDB table in the same account as our S3 bucket, or whether I need to create a DynamoDB table in each of the AWS accounts.
You can use a single DynamoDB table to control the locking for the state file for all of the accounts. This would work even if you had multiple S3 buckets to store state in.
The DynamoDB table is keyed on LockID, which is set to bucketName/path. So as long as you have a unique combination of those, you will be fine (and you should, or you have bigger problems with your state management).
Obviously you will need to set up cross-account IAM policies to allow users creating things in one account to manage items in the DynamoDB table (see the sketch below).
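A minimal sketch of the DynamoDB permissions Terraform's S3 backend needs for locking, to be granted cross-account; the region, account ID, and table name are placeholders:

data "aws_iam_policy_document" "terraform_lock" {
  statement {
    # The S3 backend's locking only needs these three DynamoDB actions
    actions = [
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]
    resources = ["arn:aws:dynamodb:us-east-2:111122223333:table/terraform-lock"]
  }
}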
To use Terraform DynamoDB locking, follow the steps below.
1. Create a DynamoDB table with Terraform to lock the terraform.tfstate:
provider "aws" {
region = "us-east-2"
}
resource "aws_dynamodb_table" "dynamodb-terraform-lock" {
name = "terraform-lock"
hash_key = "LockID"
read_capacity = 20
write_capacity = 20
attribute {
name = "LockID"
type = "S"
}
tags {
Name = "Terraform Lock Table"
}
}
2. Execute terraform to create the DynamoDB table on AWS:
terraform apply
Usage Example
1. Use the DynamoDB table to lock terraform.tfstate creation on AWS, using an EC2 instance as an example:
terraform {
  backend "s3" {
    bucket         = "terraform-s3-tfstate"
    region         = "us-east-2"
    key            = "ec2-example/terraform.tfstate"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_instance" "ec2-example" {
  ami           = "ami-a4c7edb2"
  instance_type = "t2.micro"
}
The dynamodb_table value must match the name of the DynamoDB table we created.
2. Initialize the Terraform S3 and DynamoDB backend:
terraform init
3. Execute terraform to create the EC2 server:
terraform apply
To see the full code, go to the GitHub DynamoDB Locking Example.