I am getting the following error when describing transit gateway peering attachments and more than one attachment matches; the error does not occur when only a single peering attachment matches.
Code
data "aws_ec2_transit_gateway_attachment" "example" {
  filter {
    name   = "state"
    values = ["pendingAcceptance", "available"]
  }

  filter {
    name   = "transit-gateway-id"
    values = ["<Transit_gateway_id>"]
  }
}
Error
Error: multiple EC2 Transit Gateway Attachments matched; use additional constraints to reduce matches to a single EC2 Transit Gateway Attachment
│
│ with data.aws_ec2_transit_gateway_attachment.example,
│ on main.tf line 5, in data "aws_ec2_transit_gateway_attachment" "example":
│ 5: data "aws_ec2_transit_gateway_attachment" "example" {
As the error tells you, multiple attachments match your filters. You should make the filters more precise. If you create the TGW attachment in Terraform, just add its ID to the filter. If you are looking only for pending attachments, remove "available" from the state values.
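For example, one way to narrow the result to a single match is to restrict the resource type to peering attachments and keep only one state (a sketch; the filter values are placeholders to adjust for your environment):

```hcl
data "aws_ec2_transit_gateway_attachment" "example" {
  filter {
    name   = "transit-gateway-id"
    values = ["<Transit_gateway_id>"]
  }

  # Restrict to peering attachments only; valid values include
  # vpc, vpn, direct-connect-gateway, peering and connect.
  filter {
    name   = "resource-type"
    values = ["peering"]
  }

  # Use a single state if both states can match at the same time.
  filter {
    name   = "state"
    values = ["pendingAcceptance"]
  }
}
```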
Related
I have a global Aurora RDS cluster that takes an automated snapshot every day. My DB instances look like this:
new-test ( Global Database )
new-test-db ( Primary Cluster )
new-test-db-0 ( Writer Instance )
I have enabled automated snapshots for the DB. What I am trying to achieve is to get the ARN of my snapshot using a data source. My ARN is something like this:
arn:aws:rds:us-west-2:123456789101:cluster-snapshot:rds:new-test-db-2022-08-23-08-06
This is what my data source looks like:
data "aws_db_cluster_snapshot" "db" {
  for_each              = toset(var.rds_sources)
  db_cluster_identifier = each.key
  most_recent           = true
}
where var.rds_sources is a list of strings. But when I try to access the ARN using:
data.aws_db_cluster_snapshot.db[*].db_cluster_snapshot_arn
I keep running into
Error: Unsupported attribute
│
│ on ../main.tf line 73, in resource "aws_iam_policy" "source_application":
│ 73: cluster_data_sources = jsonencode(data.aws_db_cluster_snapshot.db[*].db_cluster_snapshot_arn)
│
│ This object does not have an attribute named "db_cluster_snapshot_arn".
Which is weird, since the attribute is laid out in the official docs. Thank you for the help.
This is my provider file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.75"
    }
    archive = "~> 2.2.0"
  }
  required_version = "~> 1.2.6"
}
Since the data source uses for_each, the result is a map of key/value pairs. Terraform has a built-in function values [1] which fetches the values of a map. Its return value is a list, so the splat operator [2] can then be used to get the attribute from every element. Since the data source returns multiple attributes and only one is required (namely db_cluster_snapshot_arn), the final expression is as follows:
jsonencode(values(data.aws_db_cluster_snapshot.db)[*].db_cluster_snapshot_arn)
[1] https://www.terraform.io/language/functions/values
[2] https://www.terraform.io/language/expressions/splat
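To illustrate why the splat alone fails: with for_each the data source evaluates to a map keyed by each.key, not a list, and [*] only traverses lists. A minimal sketch (the output name here is made up):

```hcl
# data.aws_db_cluster_snapshot.db is a map, e.g.
#   { "cluster-a" = {...}, "cluster-b" = {...} }
# values() converts it to a list so the splat can traverse it.
output "snapshot_arns" {
  value = values(data.aws_db_cluster_snapshot.db)[*].db_cluster_snapshot_arn
}
```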
I have created a TGW using the official transit-gateway module and I am using the default route table. I am also seeing that the module has created an additional route table, which I am not able to remove via Terraform code.
module "transit-gateway" {
  source  = "terraform-aws-modules/transit-gateway/aws"
  version = "1.4.0"

  name            = var.tgw
  amazon_side_asn = 64532

  enable_auto_accept_shared_attachments = true

  vpc_attachments = {
    vpc = {
      vpc_id       = module.vpc.vpc_id
      subnet_ids   = [module.vpc.private_subnets[0]]
      dns_support  = true
      ipv6_support = false

      transit_gateway_default_route_table_association = true
      transit_gateway_default_route_table_propagation = true
    }
  }

  ram_allow_external_principals = true
  ram_principals                = [123456789, 0987654321]

  tags = {
    Environment = var.env
    Automated   = "Terraform"
    Owner       = var.owner
    Project     = var.project
  }
}
If you look at the module's source code here, the only way to disable creation of the aws_ec2_transit_gateway_route_table is by setting create_tgw to false.
If you do this, you disable the entire TGW. So the answer to your question is that you can't remove the extra route table without removing the entire TGW.
This is because, if you inspect the module's source code (or that of similar modules, such as CloudPosse's), you'll see that it creates another transit gateway route table apart from the one created automatically by the transit gateway itself.
Make a quick test by creating a transit gateway manually in the AWS Console.
In a nutshell: if the desired result is a single transit gateway route table, you'll have to develop a module yourself, as these modules won't use the ID of the route table created automatically by the transit gateway, presumably because it's not possible to manage that automatically created underlying resource.
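That said, if all that's needed is a reference to the route table the TGW creates automatically (rather than removing the module's extra one), it may be possible to read it back through the aws_ec2_transit_gateway data source. A sketch, assuming the module exposes the TGW ID as ec2_transit_gateway_id:

```hcl
data "aws_ec2_transit_gateway" "this" {
  id = module.transit-gateway.ec2_transit_gateway_id
}

# ID of the route table the TGW associates attachments with by default.
output "default_association_route_table_id" {
  value = data.aws_ec2_transit_gateway.this.association_default_route_table_id
}
```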
After making changes to the endpoint service, for example adding a new tag, Terraform attempts to delete the network load balancer first when running terraform apply, and this fails because the NLB is associated with the endpoint service.
The endpoint service should be deleted first, so that the network load balancer can be deleted afterwards.
Is there a way to set which one gets deleted first?
module.Tester_vpc.data.aws_instances.webservers: Refreshing state...
Error: Error deleting LB: ResourceInUse: Load balancer 'arn:aws:elasticloadbalancing:ap-south-1:123456:loadbalancer/net/myNLB/123456' cannot be deleted because it is currently associated with another service
status code: 400, request id: 25944b2d-49c7-1234-a32c-faeb6e2e7c7f
Here are the NLB resources.
resource "aws_vpc_endpoint_service" "nlb_service" {
  count                      = var.create_lb ? 1 : 0
  acceptance_required        = false
  network_load_balancer_arns = [aws_lb.myNLB[0].arn]
}

resource "aws_vpc_endpoint" "service_consumer" {
  count               = var.create_lb ? 1 : 0
  vpc_id              = data.aws_vpc.vpc_id.id
  subnet_ids          = data.aws_subnet_ids.private_subnet_ids.ids
  security_group_ids  = [data.aws_security_group.sG_myVPC.id]
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = false
  service_name        = aws_vpc_endpoint_service.nlb_service[0].service_name

  tags = {
    Name = "tester_service" # When adding a tag, the NLB attempts to get deleted first and fails.
  }
}
You probably have to do it manually. There are open issues on GitHub for that problematic dependency which are still not resolved:
Dependency between subnets and LBs/VPC Endpoints not detected
endpoint service NLB change
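Until those issues are fixed, one possible manual workaround is a targeted destroy of the endpoint resources before applying the change (resource addresses taken from the question; review each plan before confirming):

```
terraform destroy -target=aws_vpc_endpoint.service_consumer
terraform destroy -target=aws_vpc_endpoint_service.nlb_service
terraform apply
```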
I'm forwarding DNS requests sent to a list of internal (on-premises) domains using the AWS Route 53 resolver. With Terraform, I want to share the rules I created with other accounts in the company, so I have the following:
# I create as many resource shares as I have domains, so if I have 30 domains, I'll create 30 RAM shares:
resource "aws_ram_resource_share" "endpoint_share" {
  count                     = length(var.forward_domain)
  name                      = "route53-${var.forward_domain[count.index]}-share"
  allow_external_principals = false
}

# Here I share every single endpoint with all the AWS accounts we have
resource "aws_ram_principal_association" "endpoint_ram_principal" {
  count     = length(var.resource_share_accounts)
  principal = var.resource_share_accounts[count.index]
  resource_share_arn = {
    for item in aws_ram_resource_share.endpoint_share[*] :
    item.arn
  }
}
The last block references the arn output of the first one, which is a list.
Now, this last block doesn't work; I don't know how to use multiple counts. When I run this, I get the following error:
Error: Invalid 'for' expression
line 37: Key expression is required when building an object.
Any idea how to make this work?
Terraform version: 0.12.23
Use square brackets in resource_share_arn, like this:
resource_share_arn = [
  for item in aws_ram_resource_share.endpoint_share[*] :
  item.arn
]
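Note, however, that resource_share_arn expects a single ARN, not a list, so associating every principal with every share likely needs a pair-wise expansion. A sketch using setproduct (variable and resource names taken from the question):

```hcl
locals {
  # Every (share ARN, principal) pair.
  share_principal_pairs = setproduct(
    aws_ram_resource_share.endpoint_share[*].arn,
    var.resource_share_accounts,
  )
}

resource "aws_ram_principal_association" "endpoint_ram_principal" {
  count              = length(local.share_principal_pairs)
  resource_share_arn = local.share_principal_pairs[count.index][0]
  principal          = local.share_principal_pairs[count.index][1]
}
```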
I am trying to create a multi-value SRV DNS record in the AWS Route 53 service via Terraform. The values should be taken from instance tags. Because this is a single record, the count approach is not applicable.
The trick is that I have 10 instances, but they first need to be filtered by specific tags. Based on the resulting list, the SRV record should be created using the Name tag assigned to each instance.
Any idea how to approach this issue?
Thanks in advance for any tip.
I did it like this:
resource "aws_instance" "myservers" {
  count = 3
  # ... other configuration ...
}

resource "aws_route53_record" "srv" {
  zone_id = aws_route53_zone.myzone.zone_id
  name    = "_service"
  type    = "SRV"
  ttl     = 300 # TTL is required for a non-alias record

  # The servers are defined as a resource above, not a data source.
  records = [for server in aws_instance.myservers : "0 10 5000 ${server.private_ip}."]
}
Terraform's for expression is the key to the solution.
Regarding the SRV record in AWS Route 53: it should have one line per server, each line in the form priority weight port target (space is the delimiter). In the example above, 0 is the priority, 10 is the weight, 5000 is the port and the last field is the server IP (or name).
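Since the question asks for filtering existing instances by tag rather than creating them, a hedged sketch using the plural aws_instances data source may fit better (the tag key/value and the zone reference are assumptions):

```hcl
# Look up already-running instances by tag (key/value are placeholders).
data "aws_instances" "tagged" {
  instance_tags = {
    Role = "myservice"
  }
}

resource "aws_route53_record" "srv_filtered" {
  zone_id = aws_route53_zone.myzone.zone_id
  name    = "_service"
  type    = "SRV"
  ttl     = 300
  records = [for ip in data.aws_instances.tagged.private_ips : "0 10 5000 ${ip}."]
}
```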