Terraform doesn't seem to be able to create private hosted Route53 zones; it dies with the following error when I try to create a new private hosted zone associated with an existing VPC:
Error applying plan:
1 error(s) occurred:
aws_route53_zone.analytics: InvalidVPCId: The VPC: vpc-xxxxxxx you provided is not authorized to make the association.
status code: 400, request id: b411af23-0187-11e7-82e3-df8a3528194f
Here's my .tf file:
provider "aws" {
  region  = "${var.region}"
  profile = "${var.environment}"
}

variable "vpcid" {
  default = "vpc-xxxxxx"
}

variable "region" {
  default = "eu-west-1"
}

variable "environment" {
  default = "dev"
}

resource "aws_route53_zone" "analytics" {
  vpc_id = "${var.vpcid}"
  name   = "data.int.example.com"
}
I'm not sure which of these the error refers to:
- The VPC somehow needs to be authorised in advance to associate with the zone.
- The AWS account running Terraform needs the correct IAM permissions to associate the zone with the VPC.
Would anyone have a clue how I could troubleshoot this further?
Sometimes you also face this issue when the AWS region configured in the provider config is different from the region in which your VPC is deployed. For such cases you can use an alias for the AWS provider, like below:
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  region = "ap-southeast-1"
  alias  = "singapore"
}
then we can use it as below in terraform resources:
resource "aws_route53_zone_association" "vpc_two" {
  provider = "aws.singapore"
  zone_id  = "${aws_route53_zone.dlos_vpc.zone_id}"
  vpc_id   = "${aws_vpc.vpc_two.id}"
}
The above snippet is helpful when you need your Terraform script to deploy to multiple regions.
First, check whether you are running the latest version of Terraform.
Second, your code is wrong if you compare it with the sample:
data "aws_route53_zone" "selected" {
  name         = "test.com."
  private_zone = true
}

resource "aws_route53_record" "www" {
  zone_id = "${data.aws_route53_zone.selected.zone_id}"
  name    = "www.${data.aws_route53_zone.selected.name}"
  type    = "A"
  ttl     = "300"
  records = ["10.0.0.1"]
}
The error you're getting means that either your user/role doesn't have the necessary VPC-related permissions or you are using the wrong VPC ID.
I'd suggest you double-check the VPC ID you are using, potentially using the VPC data source to fetch it:
# Assuming you use the "Name" tag on the VPC resource to identify your VPCs
variable "vpc_name" {}

data "aws_vpc" "selected" {
  tags {
    Name = "${var.vpc_name}"
  }
}

resource "aws_route53_zone" "analytics" {
  vpc_id = "${data.aws_vpc.selected.id}"
  name   = "data.int.example.com"
}
You also want to check that your user/role has the necessary VPC-related permissions. For this you'll probably want all of the permissions listed in the docs.
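As a rough sketch (assuming Terraform 0.12+ syntax; the policy name is hypothetical, and you should confirm the exact action list against the Route53 docs), an IAM policy granting the permissions typically involved in creating and associating a private hosted zone might look like this:

```hcl
# Hypothetical policy covering private hosted zone creation/association.
# Verify the exact required actions against the AWS Route53 documentation.
resource "aws_iam_policy" "route53_private_zone" {
  name = "route53-private-zone" # hypothetical name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "route53:CreateHostedZone",
        "route53:AssociateVPCWithHostedZone",
        "route53:GetHostedZone",
        "ec2:DescribeVpcs",
      ]
      Resource = "*"
    }]
  })
}
```

Attach this policy to the user or role that runs Terraform; `AssociateVPCWithHostedZone` in particular also needs the EC2 describe permission on the VPC side.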
I have created a policy X with EC2 and VPC full access and attached it to userA. userA has console access, so by switching roles userA can create an instance from the console.
Now, userB has programmatic access with policy Y, also with EC2 and VPC full access. But when I tried to create an instance using Terraform, I got this error:
Error: creating Security Group (allow-80-22): UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message:
Even aws ec2 describe-instances gives an error:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Can anyone help me with this?
Thanks in advance.
To be honest, there are a couple of mistakes in the question itself, but I have ignored them and provided a solution for each scenario:
1. Create resources using an IAM user with only programmatic access and the required policies attached directly to it.
In general, if you have an AWS IAM user who has programmatic access and already has the required policies attached, then it's pretty straightforward to create any resources within those permissions, like any normal use case.
2. Create resources using an IAM user with only programmatic access by assuming a role that has the required policies attached (role only):
providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

## If you hardcode the role_arn, two provider configs are not required (one with the hardcoded value is enough, without any alias).
provider "aws" {
  region = "eu-central-1"
}

provider "aws" {
  alias  = "ec2_and_vpc_full_access"
  region = "eu-central-1"
  assume_role {
    role_arn = data.aws_iam_role.stackoverflow.arn
  }
}
resources.tf
/*
  !! Important !!
  Currently the AWS secrets (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) used to authenticate Terraform
  belong to the user which has the AWS managed policy [IAMFullAccess] attached directly, so that it can read the role ARN.
*/

# If you have hardcoded role_arn in the provider config, this can be ignored and no alias provider config is required.
# Using the default provider to read the role.
data "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"
}

# Using the provider with the role having AWS managed policies [EC2 and VPC full access] attached
data "aws_vpc" "default" {
  provider = aws.ec2_and_vpc_full_access
  default  = true
}

# Using the provider with the role having AWS managed policies [EC2 and VPC full access] attached
resource "aws_key_pair" "eks_jump_host" {
  provider   = aws.ec2_and_vpc_full_access
  key_name   = "ec2keypair"
  public_key = file("${path.module}/../../ec2keypair.pub")
}

# Example from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
# Using the provider with the role having AWS managed policies [EC2 and VPC full access] attached
data "aws_ami" "ubuntu" {
  provider    = aws.ec2_and_vpc_full_access
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

# Using the provider with the role having AWS managed policies [EC2 and VPC full access] attached
resource "aws_instance" "terraform-ec2" {
  provider        = aws.ec2_and_vpc_full_access
  ami             = data.aws_ami.ubuntu.id
  instance_type   = "t2.micro"
  key_name        = "ec2keypair"
  security_groups = [aws_security_group.t-allow_tls.name]
}

# Using the provider with the role having AWS managed policies [EC2 and VPC full access] attached
resource "aws_security_group" "t-allow_tls" {
  provider    = aws.ec2_and_vpc_full_access
  name        = "allow-80-22"
  description = "Allow TLS inbound traffic"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    description      = "http"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
For a full solution refer to the Github repo. I hope this is what you were looking for and that it helps.
I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide a different set of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or maybe provide credentials to one via environment variables and to the other through shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = "${var.region}"
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access keys configuration used by the S3 backend are separate from the AWS profile/access keys configuration used by the AWS provider. By default they are both going to be looking in the same place, but you could configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion:
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
  profile = "profile-2"
  region  = "us-east-1"
}
resource .......
This way, the state file will be stored using profile-1, and all the resources will be created using profile-2.
I am new to Terraform and still learning.
I know there is a way to import your existing infrastructure into Terraform and have the state file created. But as of now I have multiple AWS accounts, with multiple regions in those accounts, which have multiple VPCs.
My task is to create VPC flow logs through Terraform.
Is it possible?
If it is, could you please help me or direct me on how to get this done?
It is possible, albeit a little messy. You will need to create a provider block (with a unique alias) for each account/region combination you have (you can use profiles, but I think roles are best), and select those providers in your resources appropriately.
provider "aws" {
  alias  = "acct1uswest2"
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
  }
}

provider "aws" {
  alias  = "acct2useast1"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
  }
}
resource "aws_flow_log" "flow1" {
  provider = aws.acct1uswest2
  vpc_id   = "vpc id in account 1" # you mentioned the VPC already exists, so you can either import the VPC and reference its .id attribute here, or just put the ID here as a string
  ...
}

resource "aws_flow_log" "flow2" {
  provider = aws.acct2useast1
  vpc_id   = "vpc id in account 2"
  ...
}
Suggest you read up on importing resources (and the implications) here
More on multiple providers here
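For completeness, a minimal sketch of a fully specified flow log (the elided `...` above) publishing to S3 might look like the following; the VPC ID, bucket ARN, and resource name are placeholders, not values from the question:

```hcl
# Sketch only: assumes an existing S3 bucket for flow log delivery and the
# provider alias defined above. Replace the placeholder IDs/ARNs with real ones.
resource "aws_flow_log" "acct1_vpc" {
  provider             = aws.acct1uswest2
  vpc_id               = "vpc-xxxxxxxx"                     # existing VPC in account 1
  traffic_type         = "ALL"                              # ACCEPT, REJECT, or ALL
  log_destination_type = "s3"
  log_destination      = "arn:aws:s3:::my-flow-log-bucket"  # hypothetical bucket ARN
}
```

Delivering to S3 avoids the extra IAM role and CloudWatch log group that the CloudWatch Logs destination type requires.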
I am trying to set up AWS SFTP transfer in VPC endpoint mode, but there is one thing I can't manage.
The problem is how to get target IPs for the NLB target group.
The only output I found:
output "vpc_endpoint_transferserver_network_interface_ids" {
  description = "One or more network interfaces for the VPC Endpoint for transferserver"
  value       = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}
gives network interface ids which cannot be used as targets:
Outputs:

api_url = https://12345.execute-api.eu-west-1.amazonaws.com/prod
vpc_endpoint_transferserver_network_interface_ids = [
  "eni-12345",
  "eni-67890",
  "eni-abcde",
]
I went through:
terraform get subnet integration ips from vpc endpoint subnets tab
and
Terraform how to get IP address of aws_lb
but none of them seems to be working. The latter says:
on modules/sftp/main.tf line 134, in data "aws_network_interface" "ifs":
134: count = "${length(local.nlb_interface_ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
You can create Elastic IPs:

resource "aws_eip" "example1" {
  vpc = true
}

resource "aws_eip" "example2" {
  vpc = true
}

Then specify the Elastic IPs while creating the Network LB:

resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "network"

  subnet_mapping {
    subnet_id     = "${aws_subnet.example1.id}"
    allocation_id = "${aws_eip.example1.id}"
  }

  subnet_mapping {
    subnet_id     = "${aws_subnet.example2.id}"
    allocation_id = "${aws_eip.example2.id}"
  }
}
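Alternatively, if you still want to target the endpoint ENIs directly, one way around the unknown-count error is to hard-code the number of subnets and look up each ENI's private IP. This is a sketch under assumptions: Terraform 0.12+ syntax, exactly three subnets, and hypothetical resource names (`sftp`, the VPC ID placeholder):

```hcl
resource "aws_lb_target_group" "sftp" {
  name        = "sftp-endpoint"  # hypothetical name
  port        = 22
  protocol    = "TCP"
  target_type = "ip"             # IP targets, so ENI private IPs can be registered
  vpc_id      = "vpc-xxxxxxxx"   # placeholder
}

# Hard-coding the count (known at plan time) avoids the
# "count value depends on resource attributes" error.
data "aws_network_interface" "endpoint" {
  count = 3
  id    = tolist(aws_vpc_endpoint.transfer_server.network_interface_ids)[count.index]
}

resource "aws_lb_target_group_attachment" "sftp" {
  count            = 3
  target_group_arn = aws_lb_target_group.sftp.arn
  target_id        = data.aws_network_interface.endpoint[count.index].private_ip
  port             = 22
}
```

Note this still requires the endpoint to exist before the data sources can resolve, so a two-step apply (or `-target`) may be needed on first creation.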
I'm trying to provision an EC2 instance in an existing, specific VPC by providing only an AWS region to Terraform.
I want to automatically choose that specific VPC using a regex or some other method; this is my .tf file.
Can anybody help me?
The VPC I want chosen automatically has the prefix "digital".
So instead of providing its name here -> name = "tag:${local.env_profile}-vpc"
I want to provide only the region and then select this specific VPC using a regex.
provider "aws" {
  region                  = "eu-west-3"
  shared_credentials_file = "${var.shared_credentials_file}"
  profile                 = "${var.profile}"
}

data "aws_vpc" "selected" {
  filter {
    name   = "tag:${local.env_profile}-vpc"
    values = ["${local.env_profile}-vpc"]
  }
}

resource "aws_instance" "general" {
  ami           = "ami-00035f41c82244dab"
  instance_type = "${var.type}"
  vpc           = "${data.aws_vpc.selected.id}"
  key_name      = "${var.key_name}"
  tags {
    Name = "empty"
  }
}
I don't think there is a way to select a VPC via regex in Terraform. You can check the data source for VPC here: https://www.terraform.io/docs/providers/aws/d/vpc.html
You can refer to a VPC by using its ID or its Name tag.
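That said, the underlying EC2 DescribeVpcs filters do accept `*` wildcards, so a prefix match (though not a full regex) is possible. A sketch, assuming the VPC carries a `Name` tag starting with "digital" (the tag key and data source name are assumptions):

```hcl
# Matches any VPC whose Name tag starts with "digital";
# fails if zero or more than one VPC matches.
data "aws_vpc" "digital" {
  filter {
    name   = "tag:Name"
    values = ["digital*"]
  }
}
```

You could then use `data.aws_vpc.digital.id` wherever the VPC ID is needed, keeping in mind the data source errors out unless exactly one VPC matches the filter.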