I want to execute the restore_to_point_in_time block conditionally, depending on whether the flag restore is set to true or false.
Question 1: How can I achieve this? I tried using for_each with a dynamic block, but it didn't work.
Question 2: When I use the restore_time argument I get an "unexpected argument" error.
resource "aws_rds_cluster" "rds_mysql" {
.. ....
restore_to_point_in_time {
source_cluster_identifier = var.source_cluster_identifier
restore_type = var.restore_type
restore_time = var.restore_time
}
}
Dynamic blocks are the way to do it:
resource "aws_rds_cluster" "rds_mysql" {
.. ....
dynamic "restore_to_point_in_time" {
for_each = var.restore == true ? [1] : []
content {
source_cluster_identifier = var.source_cluster_identifier
restore_type = var.restore_type
restore_time = var.restore_time
}
}
}
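As for Question 2: the restore_to_point_in_time block of aws_rds_cluster does not take a restore_time argument (that name belongs to aws_db_instance), which is the likely cause of the "unexpected argument" error; for the cluster resource the argument is restore_to_time. Below is a minimal sketch combining that rename with the dynamic block above, assuming var.restore is a bool and var.restore_time holds an RFC 3339 timestamp; confirm the argument names against the provider docs for your provider version.
variable "restore" {
  type    = bool
  default = false
}

resource "aws_rds_cluster" "rds_mysql" {
  # ...

  dynamic "restore_to_point_in_time" {
    # Create the block only when the restore flag is set.
    for_each = var.restore ? [1] : []
    content {
      source_cluster_identifier = var.source_cluster_identifier
      restore_type              = var.restore_type
      # The cluster-level argument is restore_to_time, not restore_time.
      restore_to_time = var.restore_time
    }
  }
}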
I am trying to build a reusable module that creates multiple S3 buckets. Based on a condition, some buckets should have lifecycle rules while others should not. I am using a for expression in the lifecycle rule resource and it mostly works, but not 100%.
My var:
variable "bucket_details" {
type = map(object({
bucket_name = string
enable_lifecycle = bool
glacier_ir_days = number
glacier_days = number
}))
}
How I iterate over the map in the lifecycle resource:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = { for bucket, values in var.bucket_details : bucket => values if values.enable_lifecycle }
depends_on = [aws_s3_bucket_versioning.compliant_s3_bucket_versioning]
bucket = aws_s3_bucket.compliant_s3_bucket[each.key].bucket
rule {
id = "basic_config"
status = "Enabled"
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
transition {
days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
transition {
days = each.value["glacier_days"]
storage_class = "GLACIER"
}
expiration {
days = 2555
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_ir_days"]
storage_class = "GLACIER_IR"
}
noncurrent_version_transition {
noncurrent_days = each.value["glacier_days"]
storage_class = "GLACIER"
}
noncurrent_version_expiration {
noncurrent_days = 2555
}
}
}
How I WOULD love to reference it in the root module:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
enable_lifecycle = true
glacier_ir_days = 555
glacier_days = 888
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
enable_lifecycle = false
}
}
}
So when I reference it like that, validation fails because I am not setting values for both glacier_ir_days & glacier_days - understandable.
My question is: is there a way, when enable_lifecycle is set to false, to not require values for those two attributes?
Currently, as a workaround, I am just setting zeroes for those and since the resource is not created if enable_lifecycle is false, it does not matter, but I would love it to be cleaner.
Thank you in advance.
The forthcoming Terraform v1.3 release will include a new feature for declaring optional attributes in an object type constraint, with the option of declaring a default value to use when the attribute isn't set.
At the time I'm writing this the v1.3 release is still under development and so not available for general use, but I'm going to answer this with an example that should work with Terraform v1.3 once it's released. If you wish to try it in the meantime you can experiment with the most recent v1.3 alpha release which includes this feature, though of course I would not recommend using it in production until it's in a final release.
It seems that your glacier_ir_days and glacier_days attributes are, from a modeling perspective, attributes that are required when the lifecycle is enabled and not required when it is disabled.
I would suggest modelling that by placing these attributes in a nested object called lifecycle and implementing it such that the lifecycle resource is enabled when that attribute is set, and disabled when it is left unset.
The declaration would therefore look like this:
variable "s3_buckets" {
type = map(object({
bucket_name = string
lifecycle = optional(object({
glacier_ir_days = number
glacier_days = number
}))
}))
}
When an attribute is marked as optional(...) like this, Terraform will allow omitting it in the calling module block and then will quietly set the attribute to null when it performs the type conversion to make the given value match the type constraint. This particular declaration doesn't have a default value, but it's also possible to pass a second argument in the optional(...) syntax which Terraform will then use instead of null as the placeholder value when the attribute isn't specified.
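As an aside, if you wanted fallback values rather than null, the nested attributes can themselves use that second argument; here is a minimal sketch, where the 90 and 365 defaults are placeholders rather than values from the question:
variable "bucket_details" {
  type = map(object({
    bucket_name = string
    lifecycle = optional(object({
      # Placeholder defaults; pick values that suit your retention policy.
      glacier_ir_days = optional(number, 90)
      glacier_days    = optional(number, 365)
    }))
  }))
}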
The calling module block would therefore look like this:
module "s3_buckets" {
source = "./modules/aws-s3-compliance"
#
bucket_details = {
"fisrtbucketname" = {
bucket_name = "onlythefisrtbuckettesting"
lifecycle = {
glacier_ir_days = 555
glacier_days = 888
}
}
"secondbuckdetname" = {
bucket_name = "onlythesecondbuckettesting"
}
}
}
Your resource block inside the module will remain similar to what you showed, but the if clause of the for expression will test if the lifecycle object is non-null instead:
resource "aws_s3_bucket_lifecycle_configuration" "compliant_s3_bucket_lifecycle_rule" {
for_each = {
for bucket, values in var.bucket_details : bucket => values
if values.lifecycle != null
}
# ...
}
Finally, the references to the attributes would be slightly different to traverse through the lifecycle object:
transition {
  days          = each.value.lifecycle.glacier_days
  storage_class = "GLACIER"
}
I'm trying to create a module for SageMaker endpoints. There's an optional object variable called async_inference_config. If you omit it, the endpoint being deployed is synchronous, but if you include it, the endpoint deployed is asynchronous. To satisfy both of these use cases, the async_inference_config needs to be an optional block.
I am unsure of how to make this block optional though.
Any guidance would be greatly appreciated. See example below of structure of the optional parameter.
Example:
resource "aws_sagemaker_endpoint_configuration" "sagemaker_endpoint_configuration" {
count = var.create ? 1 : 0
name = var.endpoint_configuration_name
production_variants {
instance_type = var.instance_type
initial_instance_count = var.instance_count
model_name = var.model_name
variant_name = var.variant_name
}
async_inference_config {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
lifecycle {
create_before_destroy = true
ignore_changes = ["name"]
}
tags = var.tags
depends_on = [aws_sagemaker_model.sagemaker_model]
}
Update: What I tried based on the below suggestion, which seemed to work
dynamic "async_inference_config" {
for_each = var.async_inference_config == null ? [] : [true]
content {
output_config {
s3_output_path = lookup(var.async_inference_config, "s3_output_path", null)
}
client_config {
max_concurrent_invocations_per_instance = lookup(var.async_inference_config, "max_concurrent_invocations_per_instance", null)
}
}
}
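For the var.async_inference_config == null check above to work, the variable itself needs to default to null. A minimal sketch of how it might be declared; the loose any type is an assumption based on the lookup() calls, not something shown in the original question:
variable "async_inference_config" {
  description = "Async endpoint settings; leave null to deploy a synchronous endpoint."
  # `any` is used here so string and numeric settings can coexist in one map.
  type    = any
  default = null
}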
You could use a dynamic block [1] in combination with the for_each meta-argument [2]. It would look something like this:
dynamic "async_inference_config" {
for_each = var.s3_output_path != null && var.max_concurrent_invocations_per_instance != null ? [1] : []
content {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
}
Of course, you could come up with a different variable, say enable_async_inference_config (probably of type bool), and base the for_each on that, e.g.:
dynamic "async_inference_config" {
for_each = var.enable_async_inference_config ? [1] : []
content {
output_config {
s3_output_path = var.s3_output_path
}
client_config {
max_concurrent_invocations_per_instance = var.max_concurrent_invocations_per_instance
}
}
}
[1] https://www.terraform.io/language/expressions/dynamic-blocks
[2] https://www.terraform.io/language/meta-arguments/for_each
I want to add a domain to a listener rule in addition to paths. What arguments should I use for this?
resource "aws_alb_listener_rule" "service" {
listener_arn = var.alb_listener_arn
action {
type = "forward"
target_group_arn = aws_alb_target_group.service.arn
}
condition {
path_pattern {
values = ["/login", "/logout"]
}
}
Thank you.
The domain name is specified using host_header:
Contains a single values item which is a list of host header patterns to match.
An example usage from the docs:
condition {
  host_header {
    values = ["my-service.*.terraform.io"]
  }
}
Thanks. This worked.
condition {
  path_pattern {
    values = ["/login", "/logout"]
  }
}
condition {
  host_header {
    values = ["my-service.*.terraform.io"]
  }
}
I am building a very basic Systems Manager association in Terraform, but I do not understand what the sourceInfo field is asking for. It requires a string, but even simple strings like "test" cause it to reject the input.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{"owner":"awslabs","repository":"amazon-ssm","path":"Compliance/InSpec/PortCheck","getOptions":"branch:master"}"
#^this line doesn't work
#sourceInfo = "test"
#^this line doesn't work either
}
}
Instead of escaping all of your strings you could also use the jsonencode function to turn a map into the JSON you want:
locals {
  source_info = {
    owner      = "awslabs"
    repository = "amazon-ssm"
    path       = "Compliance/InSpec/PortCheck"
    getOptions = "branch:master"
  }
}

resource "aws_ssm_association" "sslscanssm" {
  name             = "AWS-RunInspecChecks"
  association_name = "test"

  targets = {
    key    = "tag:os"
    values = ["linux"]
  }

  parameters {
    sourceType = "GitHub"
    sourceInfo = "${jsonencode(local.source_info)}"
  }
}
I wasn't aware sourceInfo expects the JSON as a single string with all the inner double quotes escaped, or it won't work.
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets = {
key = "tag:os"
values = ["linux"]
}
parameters {
sourceType = "GitHub"
sourceInfo = "{\"owner\":\"awslabs\",\"repository\":\"amazon-ssm\",\"path\":\"Compliance/InSpec/PortCheck\",\"getOptions\":\"branch:master\"}"
}
}
There is a mistake in the code shared: there should be no equals sign after targets, but there should be one after parameters. The correct syntax for the resource is:
resource "aws_ssm_association" "sslscanssm" {
name = "AWS-RunInspecChecks"
association_name = "test"
targets {
key = "tag:os"
values = ["linux"]
}
parameters = {
sourceType = "GitHub"
sourceInfo = "${jsonencode(local.source_info)}"
}
}
I want to set multiple paths with the aws_alb_listener_rule resource,
but it seems the aws_alb_listener_rule resource cannot accept multiple values in the condition object?
Below is my resource; however, it fails with the error "Error modifying LB Listener Rule: ValidationError".
How can I fix that?
resource "aws_alb_listener_rule" "admin_static" {
listener_arn = "${aws_alb_listener.web_http.arn}"
priority = 99
action {
type = "forward"
target_group_arn = "${aws_alb_target_group.ec2_web.arn}"
}
condition {
field = "host-header"
values = ["example.com"]
}
condition {
field = "path-pattern"
values = ["/admin/*"]
}
condition {
field = "path-pattern"
values = ["/static/*"]
}
}
I added new source code and that solved the "Error modifying LB Listener Rule: ValidationError".
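The corrected source code isn't shown, but a rule can only carry one condition of each type, so a likely fix (a sketch using the newer host_header/path_pattern condition syntax shown earlier on this page, not necessarily the exact code the poster used) is to merge the two path patterns into a single condition:
resource "aws_alb_listener_rule" "admin_static" {
  listener_arn = aws_alb_listener.web_http.arn
  priority     = 99

  action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.ec2_web.arn
  }

  condition {
    host_header {
      values = ["example.com"]
    }
  }

  # Both path patterns go in one condition; a rule may only have one
  # path-pattern condition, which is what triggers the ValidationError.
  condition {
    path_pattern {
      values = ["/admin/*", "/static/*"]
    }
  }
}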