Terraform cycle error using count with AWS API Gateway - amazon-web-services

I am getting Cycle errors when adding dependencies on the previous count index.
I want to define an API path /test1/{id} using Terraform on AWS, with the {id} resource depending on test1. If the resource is test1, its parent should be the root API Gateway resource.
locals {
resources = ["test1", "{id}"]
}
resource "aws_api_gateway_rest_api" "root_api" {
name = "dev-api"
}
resource "aws_api_gateway_resource" "dev_gateway_test_resources" {
count = length(local.resources)
path_part = local.resources[count.index]
parent_id = count.index == 0 ? aws_api_gateway_rest_api.root_api.root_resource_id : aws_api_gateway_resource.dev_gateway_test_resources[count.index - 1].id
rest_api_id = aws_api_gateway_rest_api.root_api.id
}
The response:
Error: Cycle: aws_api_gateway_resource.dev_gateway_test_resources[1], aws_api_gateway_resource.dev_gateway_test_resources[0]
I have tried the same logic using for_each; however, I still see the same error.

What you have there is a resource that references itself. As of today, this is not supported in Terraform, even though it is easy to see that the resource does not actually reference itself, but rather another instance of the same resource created in a loop. There is an open issue for supporting self-referencing: GitHub
Now, I don't know how many items your resources list can have, but in this case you would probably want to duplicate your aws_api_gateway_resource:
resource "aws_api_gateway_resource" "dev_gateway_test_root" {
path_part = "test"
parent_id = aws_api_gateway_rest_api.root_api.root_resource_id
rest_api_id = aws_api_gateway_rest_api.root_api.id
}
resource "aws_api_gateway_resource" "dev_gateway_test_resources" {
path_part = "{id}"
parent_id = aws_api_gateway_resource.dev_gateway_test_root.id
rest_api_id = aws_api_gateway_rest_api.root_api.id
}
What you want to achieve is an endpoint such as /test/{id}. In my experience, API Gateway endpoint trees tend to grow in width rather than depth, meaning that you will have something like:
/
/test1
/{id}
/test2
/{id}
/test3
...
/testN
rather than one really lengthy endpoint like /test/child1/child2/.../{id}.
You can have a loop for each level of the tree; those loops won't be self-referencing. What you cannot really have is a loop spanning different levels of the tree.
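A sketch of that per-level pattern (the resource names and the top_level list here are hypothetical): one loop creates the first-level resources under the root, and a second loop creates the {id} children under the first level, so neither block ever references itself.

```hcl
locals {
  top_level = ["test1", "test2", "test3"]
}

# Level 1: /testN — every instance references only the root resource.
resource "aws_api_gateway_resource" "top" {
  for_each    = toset(local.top_level)
  rest_api_id = aws_api_gateway_rest_api.root_api.id
  parent_id   = aws_api_gateway_rest_api.root_api.root_resource_id
  path_part   = each.value
}

# Level 2: /testN/{id} — every instance references only a level-1 instance,
# so there is no self-reference within either resource block.
resource "aws_api_gateway_resource" "id" {
  for_each    = aws_api_gateway_resource.top
  rest_api_id = aws_api_gateway_rest_api.root_api.id
  parent_id   = each.value.id
  path_part   = "{id}"
}
```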

Related

Terraform loop through multiple providers(accounts) - invokation through module

I have a use case where I need help using for_each to loop through multiple providers (AWS accounts & regions). This is a module, and the Terraform will use a hub-and-spoke model.
Below is the Terraform pseudo-code I would like to achieve.
module.tf
---------
app_accounts = [
{ "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child1"},
{ "account" : "53xxxx08", "app_vpc_id" : "vpc-0fxxxxxfec8", "role" : "xxxxxxx", "profile" : "child2"}
]
Below are the provider and resource files. Please ignore the variables and output files, as they are not relevant here.
provider.tf
------------
provider "aws" {
for_each = var.app_accounts
alias = "child"
profile = each.value.role
}
Here is the main resource block where I want to associate multiple child accounts with a single master account, so I want to iterate through the loop:
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = vpc_id
zone_id = zone_id
}
resource "aws_route53_zone_association" "child" {
provider = aws.child
vpc_id = vpc_id
zone_id = zone_id
}
Any idea on how to achieve this, please? Thanks in advance.
The typical way to achieve your goal in Terraform is to define a shared module representing the objects that should be present in a single account and then to call that module once for each account, passing a different provider configuration into each.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
alias = "master"
# ...
}
provider "aws" {
alias = "example1"
profile = "example1"
}
module "example1" {
source = "./modules/account"
account = "53xxxx08"
app_vpc_id = "vpc-0fxxxxxfec8"
providers = {
aws = aws.example1
aws.master = aws.master
}
}
provider "aws" {
alias = "example2"
profile = "example2"
}
module "example2" {
source = "./modules/account"
account = "53xxxx08"
app_vpc_id = "vpc-0fxxxxxfec8"
providers = {
aws = aws.example2
aws.master = aws.master
}
}
The ./modules/account directory would then contain the resource blocks describing what should exist in each individual account. For example:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
configuration_aliases = [ aws, aws.master ]
}
}
}
variable "account" {
type = string
}
variable "app_vpc_id" {
type = string
}
resource "aws_route53_zone" "example" {
# (omitting the provider argument will associate
# with the default provider configuration, which
# is different for each instance of this module)
# ...
}
resource "aws_route53_vpc_association_authorization" "master" {
provider = aws.master
vpc_id = var.app_vpc_id
zone_id = aws_route53_zone.example.id
}
resource "aws_route53_zone_association" "child" {
provider = aws.master
vpc_id = var.app_vpc_id
zone_id = aws_route53_zone.example.id
}
(I'm not sure if you actually intended var.app_vpc_id to be the VPC specified for those zone associations, but my goal here is only to show the general pattern, not to show a fully-working example.)
Using a shared module in this way lets you avoid repeating the definitions for each account separately, and keeps each account-specific setting in only one place (either in a provider "aws" block or in a module block).
There is no way to make this more dynamic within the Terraform language itself. However, if you expect to add and remove accounts regularly and want to make it more systematic, you could use code generation to mechanically produce the provider and module blocks for each account in the root module. That keeps them all consistent and lets you update them together if you ever need to change the shared module's interface in a way that affects all of the calls.

How to use 'depends_on' with 'for_each' in terraform?

I've got a Terraform plan that creates a number of resources in a for_each loop, and I need another resource to depend on those first ones. How can I do it without having to list them explicitly?
Here's the first resource (AWS API Gateway resource):
locals {
apps = toset(["app1", "app2", "app3"])
}
resource "aws_api_gateway_integration" "lambda" {
for_each = local.apps
rest_api_id = aws_api_gateway_rest_api.primary.id
resource_id = aws_api_gateway_resource.lambda[each.key].id
http_method = aws_api_gateway_method.lambda_post[each.key].http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.lambda[each.key].invoke_arn
}
Now I need to wait for all the 3 apps integrations before creating an API Gateway deployment:
resource "aws_api_gateway_deployment" "primary" {
rest_api_id = aws_api_gateway_rest_api.primary.id
depends_on = [
aws_api_gateway_integration.lambda["app1"],
aws_api_gateway_integration.lambda["app2"],
aws_api_gateway_integration.lambda["app3"],
]
However, the list of apps keeps growing and I don't want to maintain it manually here. Everywhere else I can simply use for or for_each together with local.apps, but I can't figure out how to dynamically build the list for depends_on. Any ideas?
You don't need to enumerate the instances at all. Dependencies in Terraform are always between static blocks (mainly resource, data, and module blocks), not between individual instances of those objects.
Therefore listing individual instances like you did in your example is redundant:
depends_on = [
aws_api_gateway_integration.lambda["app1"],
aws_api_gateway_integration.lambda["app2"],
aws_api_gateway_integration.lambda["app3"],
]
The above is exactly equivalent to declaring a dependency on the resource as a whole:
depends_on = [
aws_api_gateway_integration.lambda,
]
The for_each argument itself typically has its own dependencies (e.g. local.apps in your example) and so Terraform needs to construct the dependency graph before evaluating for_each. This means that there is only one node in the dependency graph representing the entire resource (including its for_each expression), and the individual instances are not represented in the plan-time dependency graph at all.
There is another way:
Create a module that wraps aws_api_gateway_integration with the for_each approach
Modify the module to support all the parameters needed
Then you can just call the module to create the new objects
At the end you can put depends_on on the module:
depends_on = [
module.logic_app
]
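A sketch of that module wrapper (the module path and names here are hypothetical):

```hcl
# modules/integrations/main.tf would hold the aws_api_gateway_integration
# resource with its for_each over var.apps.
module "integrations" {
  source = "./modules/integrations"
  apps   = local.apps
}

resource "aws_api_gateway_deployment" "primary" {
  rest_api_id = aws_api_gateway_rest_api.primary.id

  # A dependency on the module covers every resource instance it creates.
  depends_on = [module.integrations]
}
```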

How do I retrieve multiple vpc endpoints?

ERROR: no matching VPC Endpoint found
(error referring to data code block)
I am trying to retrieve multiple endpoints with the data "aws_vpc_endpoint" source. I created a local holding the service-name prefix that the endpoints share; after that prefix, each endpoint has unique characters identifying it individually.
I want the data source to loop through and retrieve each endpoint that shares those first few characters, then grab each endpoint ID for "aws_route". FYI: the endpoints are created by resource "aws_networkfirewall_firewall". The main things to look at in this code snippet are locals, data, and the last line of resource "aws_route". How can I express in locals that the service_name does not end there, and that the rest of the string is unique to each endpoint, without hard-coding each service_name?
locals {
endpoints = {
service_name = "com.amazonaws.vpce.us-east-1.vpce-svc-"
}
}
data "aws_vpc_endpoint" "firewall-endpoints" {
for_each = local.endpoints
vpc_id = aws_vpc.vpc.id
service_name = each.value
#filter {
# name = "tag:AWSNetworkFirewallManaged"
# values = [true]
#}
}
resource "aws_route" "tgw_route" {
count = var.number_azs
route_table_id = aws_route_table.tgw_rt[count.index].id
destination_cidr_block = var.tgw_aws_route[0]
vpc_endpoint_id = data.aws_vpc_endpoint.firewall-endpoints["service_name"].id
}
I can't test this, but I think what you want to do is something like this:
resource "aws_route" "tgw_route" {
for_each = aws_networkfirewall_firewall.firewall_status.sync_states
route_table_id = aws_route_table.tgw_rt[???].id
destination_cidr_block = var.tgw_aws_route[0]
vpc_endpoint_id = each.value.attachment.endpoint_id
}
I'm not clear on the structure of the firewall_status output, so that may need to change slightly. The main question is how to get the appropriate route table ID per subnet. Can you access the tgw_rt route tables in some way other than by index? Unfortunately, I have no experience with setting up an AWS firewall, just with Terraform, so I don't know how to solve this part of the puzzle.
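If the route tables happen to be keyed by availability zone, one way to sketch this out is to first flatten the firewall's sync states into a map (the attribute paths follow the AWS provider's aws_networkfirewall_firewall schema as I understand it; verify against your provider version):

```hcl
# Build a map of availability zone => firewall endpoint ID from the
# firewall's sync states.
locals {
  firewall_endpoints = {
    for ss in aws_networkfirewall_firewall.firewall.firewall_status[0].sync_states :
    ss.availability_zone => ss.attachment[0].endpoint_id
  }
}

resource "aws_route" "tgw_route" {
  for_each               = local.firewall_endpoints
  # Assumes aws_route_table.tgw_rt is itself keyed by availability zone;
  # if it uses count, you would need a different lookup.
  route_table_id         = aws_route_table.tgw_rt[each.key].id
  destination_cidr_block = var.tgw_aws_route[0]
  vpc_endpoint_id        = each.value
}
```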

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, ie. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, ie. after bucket creation.
All good...
However, when I try to add lifecycle rules AFTER I've created the bucket, I get an error telling me the bucket already exists. (Actually, I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
type = "list"
default = []
}
// example of what the assignment would look like:
lifecycle_rules = [{
id = "cleanup"
prefix = ""
enabled = true
expiration = [{
days = 1
}]
}, {...}, {...} etc...]
// example what the usage would look like
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
source = "/dev/null"
lifecycle_rule = [ "${var.lifecycle_rules}" ]
}
Note: the implementation above, with an external lifecycle policy, isn't really the best way to do it; it is simply the only way. You pretty much trick Terraform into accepting a list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform would have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
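For reference, a sketch of the dynamic-block version (this assumes the older AWS provider schema where lifecycle_rule is still an inline block on aws_s3_bucket; the variable shape is illustrative):

```hcl
variable "lifecycle_rules" {
  type = list(object({
    id      = string
    prefix  = string
    enabled = bool
    days    = number
  }))
  default = []
}

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"

  # One lifecycle_rule block is generated per element of var.lifecycle_rules.
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled
      expiration {
        days = lifecycle_rule.value.days
      }
    }
  }
}
```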
As far as I am aware, you cannot manage a lifecycle policy separately.
Someone raised an issue asking for such a resource to be created, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe the reason you're getting the error is because:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done as names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done as names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your code above you are creating 3 separate S3 buckets (one without a lifecycle, and 2 with a lifecycle) and two objects ("folders") that are placed into the S3 bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately.
All that's required is to specify them in the resource and run terraform apply; then you can edit the configuration, add/amend/remove lifecycle_rule items, and run terraform apply again to apply the changes.
resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
  bucket = aws_s3_bucket.my_s3_bucket.id # these resources take the bucket name, not the ARN
  acl    = "private"
  lifecycle {
    prevent_destroy = true
  }
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
  bucket = aws_s3_bucket.my_s3_bucket.id
  versioning_configuration {
    status = "Enabled" # takes "Enabled"/"Suspended", not a boolean
  }
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
  bucket = aws_s3_bucket.my_s3_bucket.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
  bucket = aws_s3_bucket.my_s3_bucket.id
  rule {
    id     = "dev_lifecycle_7_days"
    status = "Enabled" # takes "Enabled"/"Disabled", not a boolean
    filter {} # apply the rule to all objects in the bucket
    abort_incomplete_multipart_upload {
      days_after_initiation = 30
    }
    noncurrent_version_expiration {
      noncurrent_days = 1
    }
    transition {
      storage_class = "STANDARD_IA"
      days          = 30
    }
    expiration {
      days = 30
    }
  }
}

In AWS API Gateway, using Terraform, how do you create sub-collection resources?

I'm trying to create sub-collection resources under an existing resource, using a GET method; something like:
/customers/{customerId}/accounts or /customers/{customerId}/accounts/{accountId}
Using Terraform, I already managed to create my customers and customers/{customerId} resources – and they both work.
But when I try to add a resource under customers/{customerId}, I get the ever elusive Missing Authentication Token error (which I've come to learn mostly just means that API Gateway can't find the resource/implementation/lambda), even though everything seems to be wired up correctly.
Example code:
resource "aws_api_gateway_resource" "customers" {
rest_api_id = "${aws_api_gateway_rest_api.my-api.id}"
parent_id = "${aws_api_gateway_rest_api.my-api.root_resource_id}"
path_part = "customers"
}
resource "aws_api_gateway_resource" "single-customer" {
rest_api_id = "${aws_api_gateway_rest_api.my-api.id}"
parent_id = "${aws_api_gateway_resource.customers.id}"
path_part = "{customerId}"
}
resource "aws_api_gateway_resource" "customers-accounts" {
rest_api_id = "${aws_api_gateway_rest_api.my-api.id}"
parent_id = "${aws_api_gateway_resource.single-customer.id}"
path_part = "accounts"
}
//----
// GET
//----
resource "aws_api_gateway_method" "get-customers-accounts" {
rest_api_id = "${aws_api_gateway_rest_api.my-api.id}"
resource_id = "${aws_api_gateway_resource.customers-accounts.id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "get-customers-accounts-integration" {
content_handling = "CONVERT_TO_TEXT"
rest_api_id = "${aws_api_gateway_rest_api.my-api.id}"
resource_id = "${aws_api_gateway_resource.customers-accounts.id}"
http_method = "${aws_api_gateway_method.get-customers-accounts.http_method}"
type = "AWS_PROXY"
uri = "arn:aws:apigateway:${var.region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${var.region}:${var.account-id}:function:${var.customers-lambda}/invocations"
integration_http_method = "POST"
}
Ideas? The lambda does exist, everything looks right in the console, and I did reselect the lambda function in the API Gateway console (there's a known quirk where you'll get the Missing Authentication Token error if you don't go in and manually reselect your lambda in the console).
UPDATES
As I mentioned, the Terraform code appears to work – no error there. The literal message I get from trying to access the endpoint is
{ message: "Missing Authentication Token" }
No logs are output. If I try to test the resource/endpoint via the API Gateway Test button, I get a Malformed Lambda Proxy Response – but that's misleading, as many valid, working endpoints generate that same message when run from the Test button.
I encountered the same thing when trying to access the lambda via the endpoint, but a day later it worked even though I hadn't changed anything.
Check your lambda output, or it might be an API endpoint resolver issue.
Note: Make sure you have deployed the API after creating the API Gateway resources, or else you will get the same message.
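A sketch of wiring a deployment and stage into the question's configuration (the deployment/stage resource names here are hypothetical):

```hcl
resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = aws_api_gateway_rest_api.my-api.id

  # Ensure the method and integration exist before the deployment
  # snapshot is taken.
  depends_on = [
    aws_api_gateway_method.get-customers-accounts,
    aws_api_gateway_integration.get-customers-accounts-integration,
  ]
}

resource "aws_api_gateway_stage" "dev" {
  rest_api_id   = aws_api_gateway_rest_api.my-api.id
  deployment_id = aws_api_gateway_deployment.deployment.id
  stage_name    = "dev"
}
```

The invoke URL then includes the stage, e.g. https://<api-id>.execute-api.<region>.amazonaws.com/dev/customers/1/accounts – requesting a path that has no deployed stage is one of the most common causes of the Missing Authentication Token response.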