I have a project with many SQS queues in AWS that we need to manage.
I need to import those queues into my Terraform code, but since they're already in use, I can't destroy and recreate them.
Since we have many queues, we define their arguments (name, delay_seconds, and so on) in a locals block and feed them into a single resource block, rather than writing one resource block per queue. (We don't want to add over 10 resource blocks just to import the queues into them and end up with 100+ lines of code.)
Below is example code of how we create them:
provider "aws" {
region = "us-east-2"
}
locals {
sqs_queues = {
test-01 = {
name = "test-import-terraform-01"
delay_seconds = 30
}
test-02 = {
name = "test-import-terraform-02"
delay_seconds = 30
}
}
}
resource "aws_sqs_queue" "queue" {
for_each = local.sqs_queues
name = each.value.name
delay_seconds = each.value.delay_seconds
}
This in turn creates the queues test-import-terraform-01 and test-import-terraform-02 as expected.
Querying my state file, I can see them defined as:
aws_sqs_queue.queue["test-01"]
aws_sqs_queue.queue["test-02"]
Based on that, I would like to import two existing queues into my code: test-import-terraform-03 and test-import-terraform-04.
I thought about adding these two maps to my locals block:
    test-03 = {
      name          = "test-import-terraform-03"
      delay_seconds = 30
    }
    test-04 = {
      name          = "test-import-terraform-04"
      delay_seconds = 30
    }
But when I try to import them, I get the following error for either queue:
$ terraform import aws_sqs_queue.queue["test-03"] https://sqs.us-east-2.amazonaws.com/12345678910/test-import-terraform-03
zsh: no matches found: aws_sqs_queue.queue[test-03]
Is doing something like that possible?
Your problem is not with Terraform but with shell expansion (note that the error message comes from zsh).
Try quoting your shell arguments properly:
terraform import 'aws_sqs_queue.queue["test-03"]' 'https://sqs.us-east-2.amazonaws.com/12345678910/test-import-terraform-03'
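Escaping the brackets and quotes instead of quoting the whole address should also work in zsh and bash, for example:
terraform import aws_sqs_queue.queue\[\"test-03\"\] https://sqs.us-east-2.amazonaws.com/12345678910/test-import-terraform-03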
I am trying to create multiple packet mirror resources using for_each.
However, in GCP a packet mirror policy is restricted to only 5 subnets per policy.
Now I am stumped on how to create multiple packet mirror policies referencing, let's say, the variable mirror_vpc_subnets below.
variable "mirror_vpc_subnets" {
description = "Mirror VPC Subnets list to be mirrored."
type = list(string)
default = []
}
The objective is to get Terraform to loop over the long list in my tfvars below, cherry-picking the first 5 entries and assigning them to the first packet mirror resource, called let's say packetmirror1.
Then it looks again, starting from appworkstream6-subnet, and creates packetmirror2.
Then it looks again, starting from appworkstream11-subnet, and creates packetmirror3.
Hope this makes sense...
TFVARS here
mirror_vpc_subnets = [
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream1-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream2-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream3-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream4-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream5-subnet"
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream6-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream7-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream8-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream9-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream10-subnet"
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream11-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream12-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream13-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream14-subnet",
"projects/gcp_project_name/regions/europe-west2/subnetworks/appworkstream15-subnet"
]
Please advise how this resource can be created in a loop, incrementing the packetmirror name on each iteration.
resource "google_compute_packet_mirroring" "main" {
name = var.packet_mirror_policy_name
project = var.gcp_project_id
region = var.region
network {
url = var.collector_mirror_network_selflink
}
collector_ilb {
url = var.forwarding_rule
}
mirrored_resources {
tags = var.mirrored_tags
dynamic "subnetworks" {
for_each = var.mirror_vpc_subnets
content {
url = subnetworks.value
}
}
dynamic "instances" {
for_each = var.mirror_vpc_instances
content {
url = instances.value
}
}
}
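A minimal sketch of that chunking, assuming Terraform 0.12+ and the variable names from the question, could use the built-in chunklist() function to split the list into groups of at most 5 and create one policy per group:

locals {
  # Split the flat subnet list into sub-lists of at most 5 entries each.
  subnet_chunks = chunklist(var.mirror_vpc_subnets, 5)
}

resource "google_compute_packet_mirroring" "main" {
  count = length(local.subnet_chunks)

  # packetmirror1, packetmirror2, ... one policy per chunk of 5 subnets.
  name    = "${var.packet_mirror_policy_name}${count.index + 1}"
  project = var.gcp_project_id
  region  = var.region

  network {
    url = var.collector_mirror_network_selflink
  }

  collector_ilb {
    url = var.forwarding_rule
  }

  mirrored_resources {
    tags = var.mirrored_tags

    dynamic "subnetworks" {
      # Only the subnets belonging to this policy's chunk.
      for_each = local.subnet_chunks[count.index]
      content {
        url = subnetworks.value
      }
    }
  }
}

This is only a sketch of the chunking pattern; the instances block and any other arguments from the original resource would carry over unchanged.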
I have an example CloudFront function:
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
runtime = "cloudfront-js-1.0"
comment = "The cool function"
publish = true
code = <<EOT
function handler(event) {
var headers = event.request.headers;
if (
typeof headers.coolheader === "undefined" ||
headers.coolheader.value !== "That_is_cool_bro"
) {
console.log("That is not cool bro!")
}
return event.request;
}
EOT
}
When I create this function, the CloudWatch log group /aws/cloudfront/function/cool-function is created automatically.
But the log group's retention policy is Never Expire,
and I can't see any parameter in Terraform that allows setting the retention days.
So the question is:
Is it possible to automatically import the aws_cloudwatch_log_group every time a CloudFront function is created and change retention_in_days for that resource?
Quite a few AWS services create their log groups implicitly on first use. To prevent that, you need to create the group explicitly before the service has a chance to do it.
To do that, define the aws_cloudwatch_log_group with the given name yourself, specify the desired retention, and then create an explicit depends_on relation between the function and the log group to ensure the log group is created first. For migration purposes you would now need to import any already-created log groups into your Terraform state.
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
...
depends_on = [
aws_cloudwatch_log_group.logs
]
}
resource "aws_cloudwatch_log_group" "logs" {
name = "/aws/cloudfront/function/cool-function"
retention_in_days = 123
...
}
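For the migration case, importing an existing log group would look something like this (the import ID for aws_cloudwatch_log_group is the log group name):

terraform import aws_cloudwatch_log_group.logs /aws/cloudfront/function/cool-function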
When I was just starting to use Terraform, I more or less naively declared resources individually, like this:
resource "aws_cloudwatch_log_group" "image1_log" {
name = "${var.image1}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_group" "image2_log" {
name = "${var.image2}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "image1_stream" {
name = "${var.image1}-log-stream"
log_group_name = aws_cloudwatch_log_group.image1_log.name
}
resource "aws_cloudwatch_log_stream" "image2_stream" {
name = "${var.image2}-log-stream"
log_group_name = aws_cloudwatch_log_group.image2_log.name
}
Then, 10-20 different log groups later, I realized this wasn't going to work well as infrastructure grew. I decided to define a variable list:
variable "image_names" {
type = list(string)
default = [
"image1",
"image2"
]
}
Then I replaced the resources using indices:
resource "aws_cloudwatch_log_group" "service-log-groups" {
name = "${element(var.image_names, count.index)}-log-group"
count = length(var.image_names)
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "service-log-streams" {
name = "${element(var.image_names, count.index)}-log-stream"
log_group_name = aws_cloudwatch_log_group.service-log-groups[count.index].name
count = length(var.image_names)
}
The problem here is that when I run terraform apply, I get 4 resources to add, 4 resources to destroy. I tested this with an old log group, and saw that all my logs were wiped (obviously, since the log was destroyed).
The names and other attributes of the log groups/streams are identical- I'm simply refactoring the infrastructure code to be more maintainable. How can I maintain my existing log groups without deleting them yet still refactor my code to use lists?
You'll need to move the existing resources within the Terraform state.
Try running terraform show to get the addresses under which the resources are stored; they will look something like [module.xyz.]aws_cloudwatch_log_group.image1_log ...
You can move it with terraform state mv [module.xyz.]aws_cloudwatch_log_group.image1_log '[module.xyz.]aws_cloudwatch_log_group.service-log-groups[0]'.
You can choose which index to assign to each resource by changing [0] accordingly.
Delete the old resource definition for each moved resource, as Terraform would otherwise try to create a new group/stream.
Try it with the first move and check with terraform plan that the resource was moved correctly...
Also check whether the indices you assign line up with the order of the image_names list, just to be sure, but I think that won't be necessary.
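Put together, and assuming the resources live in the root module (no module.xyz. prefix), the moves for this example would look roughly like:

terraform state mv aws_cloudwatch_log_group.image1_log 'aws_cloudwatch_log_group.service-log-groups[0]'
terraform state mv aws_cloudwatch_log_group.image2_log 'aws_cloudwatch_log_group.service-log-groups[1]'
terraform state mv aws_cloudwatch_log_stream.image1_stream 'aws_cloudwatch_log_stream.service-log-streams[0]'
terraform state mv aws_cloudwatch_log_stream.image2_stream 'aws_cloudwatch_log_stream.service-log-streams[1]'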
I have the following core module based off this official module:
module "sqs" {
source = "github.com/terraform-aws-modules/terraform-aws-sqs?ref=0d48cbdb6bf924a278d3f7fa326a2a1c864447e2"
name = "${var.site_env}-sqs-${var.service_name}"
}
I'd like to create two queues: xyz and xyz_dead. xyz sends its dead letter messages to xyz_dead.
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
redrive_policy = <<POLICY {
"deadLetterTargetArn" : "${data.TODO.TODO.arn}",
"maxReceiveCount" : 5
}
POLICY
site_env = "${var.site_env}"
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
How do I specify the deadLetterTargetArn dependency?
If I do:
data "aws_sqs_queue" "dead_queue" {
filter {
name = "tag:Name"
values = ["${var.site_env}-sqs-xyz_dead"]
}
}
and set deadLetterTargetArn to "${data.aws_sqs_queue.dead_queue.arn}", then I get this error:
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: "name": required field is not set
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: : invalid or unknown key: filter
The best way to do this is to use the outputted ARN from the module:
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
site_env = "${var.site_env}"
redrive_policy = <<POLICY
{
"deadLetterTargetArn" : "${module.xyz_dead_queue.this_sqs_queue_arn}",
"maxReceiveCount" : 5
}
POLICY
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
NB: I've also changed the indentation of your HEREDOC here because you normally need to remove the indentation with these.
This will pass the ARN of the SQS queue directly from the xyz_dead_queue module to the xyz_queue.
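For that reference to resolve, the ../helpers/sqs wrapper also has to forward the wrapped module's ARN output. A minimal sketch of what that pass-through might look like, assuming the pinned terraform-aws-sqs version exposes an output named this_sqs_queue_arn (as the reference above implies):

output "this_sqs_queue_arn" {
  description = "ARN of the SQS queue created by the wrapped module"
  value       = "${module.sqs.this_sqs_queue_arn}"
}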
As for the errors you were getting, the aws_sqs_queue data source takes only a name argument, not a filter block like some of the other data sources do.
If you wanted to use the aws_sqs_queue data source then you'd just want to use:
data "aws_sqs_queue" "dead_queue" {
name = "${var.site_env}-sqs-${var.service_name}"
}
That said, if you are creating two things at the same time then you are going to have issues using a data source to refer to one of them unless you create the first resource first. This is because data sources run before resources, so if neither queue exists yet your data source would run, fail to find the dead letter queue, and error out. If the dead letter queue did already exist then it would be okay. In general, though, you're best off avoiding data sources like this and only using them to refer to things created in a separate terraform apply (or perhaps even created outside of Terraform).
You are also much better off simply passing the outputs of resources or modules to other resources/modules, allowing Terraform to correctly build a dependency tree for them.
I have 3 different versions of an AMI, for 3 different nodes in a cluster.
data "aws_ami" "node1"
{
# Use the most recent AMI that matches the pattern below in 'values'.
most_recent = true
filter {
name = "name"
values = ["AMI_node1*"]
}
filter {
name = "tag:version"
values = ["${var.node1_version}"]
}
}
data "aws_ami" "node2"
{
# Use the most recent AMI that matches the pattern below in 'values'.
most_recent = true
filter {
name = "name"
values = ["AMI_node2*"]
}
filter {
name = "tag:version"
values = ["${var.node2_version}"]
}
}
data "aws_ami" "node3"
{
...
}
I would like to create 3 different Launch Configuration and Auto Scaling Group using each of the AMIs respectively.
resource "aws_launch_configuration" "node"
{
count = "${local.node_instance_count}"
# Name-prefix must be used otherwise terraform fails to perform updates to existing launch configurations due to
# a name conflict: LCs are immutable and the LC cannot be destroyed without destroying attached ASGs as well, which
# terraform will not do. Using name-prefix lets a new LC be created and swapped into the ASG.
name_prefix = "${var.environment_name}-node${count.index + 1}-"
image_id = "${data.aws_ami.node[count.index].image_id}"
instance_type = "${var.default_ec2_instance_type}"
...
}
However, I am not able to use aws_ami.node1, aws_ami.node2, aws_ami.node3 via count.index the way I have shown above. I get the following error:
Error reading config for aws_launch_configuration[node]: parse error at 1:39: expected "}" but found "."
Is there another way I can do this in Terraform?
Indexing data sources isn't something that's doable at the moment.
You're likely better off simply dropping the data sources you've defined and codifying the image IDs into a Terraform map variable.
variable "node_image_ids" {
type = "map"
default = {
"node1" = "1234434"
"node2" = "1233334"
"node3" = "1222434"
}
}
Then, consume it:
image_id = "${lookup(var.node_image_ids, concat("node", count.index), "some_default_image_id")}"
The downside of this is that you'll need to manually update the image id when images are upgraded.