No function named try when rendering template in terraform - templates

I am rendering a .json document containing a policy:
data "template_file" "my_role_policy" {
  template = file("iam_role_policy_template.json")
  vars = {
    ACCESS_TO_SM = false
    FOO          = "bar"
  }
}
Within the iam_role_policy_template.json, I have the following snippet
%{ if try(ACCESS_TO_SM, false) }
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
  ],
  "Resource": "s3://my-bucket/my-path"
}
%{ endif }
This is because there are other .tf files using the same template that (for some reason) may not pass this variable.
The plan fails with the error
Error: failed to render : <template_file>:20,15-18: Call to unknown function; There is no function named "try".
I thought it was possible to use it in a template.

The hashicorp/template provider and its template_file data source have been obsolete since 2019 and so the set of available functions and language features in that provider is effectively frozen at whatever Terraform supported at that time. It's still available for installation only for backward-compatibility for those using very old Terraform modules.
The try function is considerably newer and so it isn't available in that provider and never will be. As recommended in the template_file documentation, you should migrate to using the templatefile function, which is a built-in part of the Terraform language and so always matches the features of whatever version of Terraform you are using.
You can replace your data "template_file" block with a local value whose definition is a call to the templatefile function:
locals {
  role_policy = templatefile("${path.module}/iam_role_policy_template.json", {
    ACCESS_TO_SM = false
    FOO          = "bar"
  })
}
Elsewhere in your module, each place where you refer to data.template_file.my_role_policy.rendered you can refer to local.role_policy instead.
Once you've made this change, Terraform should accept your use of try inside the template.
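For example, the rendered policy can then be attached wherever the old data source's rendered output was used. A minimal sketch, assuming an aws_iam_role.example is defined elsewhere in the module (the resource names here are illustrative, not from the original question):

```hcl
# Hypothetical sketch: attaching the locally-rendered policy to a role.
resource "aws_iam_role_policy" "example" {
  name   = "example-policy"
  role   = aws_iam_role.example.id
  policy = local.role_policy
}
```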
Separately: in your example the try function call is not achieving anything, because top-level template variables like ACCESS_TO_SM are always either defined or raise a static reference error. You can't use try with direct access to top-level template variables, only with attributes and elements of collections.
For example, if you pass a map into your template then you can use try to handle the case where an expected map key isn't present:
templatefile(..., {
  example_map = tomap({
    "a" = 1
  })
})
Then, in the template itself:
${ try(example_map["b"], 2) }
...but it is not effective to use try when its first argument is just a direct reference to a variable. Terraform requires that you define all of the variables the template uses, so the template would not be evaluated at all if you did not include ACCESS_TO_SM in the set of defined variables.

You can't use try within a %{} directive. You would have to use try before rendering the template:
data "template_file" "my_role_policy" {
  template = file("iam_role_policy_template.json")
  vars = {
    ACCESS_TO_SM = try(SOME-EXPRESSION, false)
    FOO          = "bar"
  }
}
then the template would be:
%{ if ACCESS_TO_SM == "true" }
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject"
  ],
  "Resource": "s3://my-bucket/my-path"
}
%{ endif }


Will the terraform fail if a user in the data does not exist?
I need to specify a user in the nonproduction environment by the data block:
data "aws_iam_user" "labUser" {
  user_name = "gitlab_user"
}
Then I use this user in giving the user permissions:
resource "aws_iam_role" "ApiAccessRole_abc" {
  name               = "${var.stack}-ApiAccessRole_abc"
  tags               = "${var.tags}"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": [
          "${data.aws_iam_user.labUser.arn}"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
In the production environment this user does not exist. Would Terraform break if this user does not exist? What would be a good approach to using the same Terraform configuration in both environments?
In Terraform a data block like you showed here is both a mechanism to fetch data and also an assertion by the author (you) that a particular external object is expected to exist in order for this configuration to be applyable.
In your case then, the answer is to ensure that the assertion that the object exists only appears in situations where it should exist. The "big picture" answer to this is to review the Module Composition guide and consider whether this part of your module ought to be decomposed into a separate module if it isn't always a part of the module it's embedded in, but I'll also show a smaller solution that uses conditional expressions to get the behavior you wanted without any refactoring:
variable "lab_user" {
  type    = string
  default = null
}

data "aws_iam_user" "lab_user" {
  count     = length(var.lab_user[*])
  user_name = var.lab_user
}

resource "aws_iam_role" "api_access_role_abc" {
  count              = length(data.aws_iam_user.lab_user)
  name               = "${var.stack}-ApiAccessRole_abc"
  tags               = var.tags
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = ""
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          AWS = [data.aws_iam_user.lab_user[count.index].arn]
        }
      },
    ]
  })
}
There are a few different things in the above that I want to draw attention to:
I made the lab username an optional variable rather than a hard-coded value. You can then change the behavior between your environments by assigning a different value to that lab_user variable, or leaving it unset altogether for environments that don't need a "lab user".
In the data "aws_iam_user" I set count to length(var.lab_user[*]). The [*] operator here is asking Terraform to translate the possibly-null string variable var.lab_user into a list of either zero or one elements, and then using the length of that list to decide how many aws_iam_user queries to make. If var.lab_user is null then the length will be zero and so no queries will be made.
Finally, I set the count for the aws_iam_role resource to match the length of the aws_iam_user data result, so that in any situation where there's one user expected there will also be one role created.
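To see the [*] conversion in isolation, here's a minimal sketch (the variable name is invented for illustration):

```hcl
variable "maybe_name" {
  type    = string
  default = null
}

# var.maybe_name[*] yields [] when the variable is null,
# and a one-element list when it is set, so its length is 0 or 1.
output "as_list" {
  value = var.maybe_name[*]
}
```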
If you reflect on the Module Composition guide and conclude that this lab user ought to be a separate concern in a separate module then you'd be able to remove this conditional complexity from the "gitlab user" module itself and instead have the calling module either call that module or not depending on whether such a user is needed for that environment. The effect would be the same, but the decision would be happening in a different part of the configuration and thus it would achieve a different separation of concerns. Which separation of concerns is most appropriate for your system is, in the end, a tradeoff you'll need to make for yourself based on your knowledge of the system and how you expect it might evolve in future.
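If you do split it out, the conditional module call might look roughly like this (a sketch only: the module path is hypothetical, and count on module blocks requires Terraform 0.13 or later):

```hcl
# Hypothetical: the calling module decides whether the lab user exists at all.
module "lab_user" {
  source   = "./modules/lab-user"  # hypothetical module path
  count    = var.lab_user != null ? 1 : 0
  username = var.lab_user
}
```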
As suggested in the comments, it will fail.
One approach I can suggest is to supply the username as a variable that you pass externally from a per-environment file (dev.tfvars and prod.tfvars), running Terraform with:
terraform apply --var-file example.tfvars
Then in your data resource you can use count or for_each to check whether the variable has been populated (if it has not been passed, you can skip the data lookup):
count = var.enable_gitlab_user ? 1 : 0
The AWS direct approach would be to switch from IAM user in the Principal to tag-based Condition or even Role chaining. You can take a look at this AWS blog post for some ideas. There are examples for both cases.

Creating a StringLike condition with Terraform

I am trying to generate some terraform for an aws IAM policy. The condition in the policy looks like this
"StringLike": {
  "kms:EncryptionContext:aws:cloudtrail:arn": [
    "arn:aws:cloudtrail:*:aws-account-id:trail/*"
  ]
}
I am looking at the documentation for aws_iam_policy_document: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document, but it's not clear to me how to write this in Terraform. Any help would be greatly appreciated. This is my attempt:
condition {
  test     = "StringLike"
  variable = "kms:EncryptionContext:aws:cloudtrail:arn"
  values = [
    "arn:aws:cloudtrail:*:aws-account-id:trail/*"
  ]
}
Hello Evan, your logic is correct. Just to add:
Each document configuration may have one or more statement blocks:
data "aws_iam_policy_document" "example" {
  statement {
    actions = [
      "*", # specify your actions here
    ]
    resources = [
      "*", # specify your resources here
    ]
    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values = [
        "arn:aws:cloudtrail:*:aws-account-id:trail/*"
      ]
    }
  }
}
Each policy statement may have zero or more condition blocks, which each accept the following arguments:
test (Required) The name of the IAM condition operator to evaluate.
variable (Required) The name of a Context Variable to apply the condition to. Context variables may either be standard AWS variables starting with aws:, or service-specific variables prefixed with the service name.
values (Required) The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. (That is, the tests are combined with the "OR" boolean operation.)
When multiple condition blocks are provided, they must all evaluate to true for the policy statement to apply. (In other words, the conditions are combined with the "AND" boolean operation.)
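For instance, a statement with two condition blocks must satisfy both (a hedged sketch; the second condition and its values are made up for illustration):

```hcl
# Hypothetical: the two condition blocks below are ANDed together,
# while multiple values inside one block would be ORed.
data "aws_iam_policy_document" "two_conditions" {
  statement {
    actions   = ["kms:Decrypt"]
    resources = ["*"]

    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values   = ["arn:aws:cloudtrail:*:aws-account-id:trail/*"]
    }

    condition {
      test     = "StringEquals"
      variable = "kms:CallerAccount"
      values   = ["123456789012"] # hypothetical account id
    }
  }
}
```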
Here's the reference from the Terraform documentation.
In addition, to create the policy from the document you created, you use it like this:
resource "aws_iam_policy" "example" {
  policy = data.aws_iam_policy_document.example.json
}
Here's a reference from HashiCorp.

List of Active Directory DNS servers IP addresses in an SSM document

I am converting my 0.11 code to 0.12. Most things seem to be working out well, but I am really lost on the SSM document.
In my 0.11 code, I had this code:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"
  content       = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": [
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[0]}",
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[1]}"
        ]
      }
    }
  }
}
DOC

  depends_on = ["aws_directory_service_directory.microsoftad-lab"]
}
This worked reasonably well. However, Terraform 0.12 does not accept this code, saying
This value does not have any indices.
I have been trying to look up different solutions on the web, but I am encountering countless issues with datatypes. For example, one of the solutions I have seen proposes this:
"dnsIpAddresses": [
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[0]}",
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[1]}",
]
}
and I am getting
InvalidDocumentContent: JSON not well-formed
which is kinda weird to me, since if I am looking into trace log, I seem to be getting relatively correct values:
{"Content":"{\n \"schemaVersion\": \"1.0\",\n \"description\": \"Automatic Domain Join Configuration\",\n \"runtimeConfig\": {\n \"aws:domainJoin\": {\n \"properties\": {\n \"directoryId\": \"d-9967245377\",\n \"directoryName\": \"012mig.lab\",\n \"dnsIpAddresses\": [\n \"10.0.0.227\",\n
\"10.0.7.103\",\n ]\n }\n }\n }\n}\n \n","DocumentFormat":"JSON","DocumentType":"Command","Name":"ssm_document_012mig.lab"}
I have tried concat and list to put the values together, but then I am getting the datatype errors. Right now, it looks like I am going around in loops here.
Does anyone have any direction to give me here?
Terraform 0.12 has stricter types than 0.11 and does less automatic type coercion under the covers, so here you're running into the fact that the aws_directory_service_directory resource's dns_ip_addresses attribute isn't a list but a set:
"dns_ip_addresses": {
  Type:     schema.TypeSet,
  Elem:     &schema.Schema{Type: schema.TypeString},
  Set:      schema.HashString,
  Computed: true,
},
Sets can't be indexed directly; in 0.12 they must first be converted to a list explicitly.
As an example:
variable "example_list" {
  type    = list(string)
  default = [
    "foo",
    "bar",
  ]
}

output "list_first_element" {
  value = var.example_list[0]
}
Running terraform apply on this will output the following:
Outputs:
list_first_element = foo
However if we use a set variable instead:
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = var.example_set[0]
}
Then attempting to run terraform apply will throw the following error:
Error: Invalid index

  on main.tf line 22, in output "set_first_element":
  22: value = var.example_set[0]

This value does not have any indices.
If we convert the set variable into a list with tolist first then it works:
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = tolist(var.example_set)[0]
}
Outputs:
set_first_element = bar
Note that sets may be ordered differently from what you might expect (in this case alphabetically rather than as declared). In your case this isn't an issue, but it's worth keeping in mind when indexing and expecting the elements to be in the order you declared them.
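If a predictable order matters, one option (not from the original answer) is to sort explicitly, so the result no longer depends on the set's internal ordering:

```hcl
variable "example_set" {
  type    = set(string)
  default = ["foo", "bar"]
}

# sort() returns a list in lexicographic order, so indexing is deterministic.
output "first_alphabetically" {
  value = sort(var.example_set)[0] # "bar" sorts before "foo"
}
```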
Another possible option here, instead of building the JSON output from the set or list of outputs, you could just directly encode the dns_ip_addresses attribute as JSON with the jsonencode function:
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = jsonencode(var.example_set)
}
Which outputs the following after running terraform apply:
Outputs:
set_first_element = ["bar","foo"]
So for your specific example we would want to do something like this:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"
  content       = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": ${jsonencode(aws_directory_service_directory.microsoftad-lab.dns_ip_addresses)}
      }
    }
  }
}
DOC
}
Note that I also removed the unnecessary depends_on. If a resource interpolates values from another resource, Terraform automatically understands that the interpolated resource needs to be created before the one referencing it.
The resource dependencies documentation goes into this in more detail:
Most resource dependencies are handled automatically. Terraform analyses any expressions within a resource block to find references to other objects, and treats those references as implicit ordering requirements when creating, updating, or destroying resources. Since most resources with behavioral dependencies on other resources also refer to those resources' data, it's usually not necessary to manually specify dependencies between resources.
However, some dependencies cannot be recognized implicitly in configuration. For example, if Terraform must manage access control policies and take actions that require those policies to be present, there is a hidden dependency between the access policy and a resource whose creation depends on it. In these rare cases, the depends_on meta-argument can explicitly specify a dependency.

How to instantiate contents of rendered template_file?

I have a template file that I use to render code for each item in a list:
variable "users" {
  type = "list"
  default = [
    "blackwidow",
    "hulk",
    "marvel",
  ]
}

// This will loop through the users list above and render out code for
// each item in the list.
data "template_file" "init" {
  template = file("user_template.tpl")
  count    = length(var.users)
  vars = {
    username = var.users[count.index]
    bucketid = aws_s3_bucket.myFTP_Bucket.id
  }
}
The template file has multiple aws resources like
- "aws_transfer_user"
- "aws_s3_bucket_object"
- "aws_transfer_ssh_key"
etc... In fact it can have more stuff than just that. It also has some terraform variables in there too.
This data template works great in rendering out the contents of the template file, substituting in the names of my users.
But that's all terraform does.
Terraform doesn't instantiate the rendered content of the template file. It just merely keeps it as a string and keeps it in memory. Kind of like the C preprocessor doing substitution, but not 'including' the file.
Kind of frustrating. I'd like Terraform to instantiate the contents of my rendered template file. How do I do this?
The template_file data source (along with the templatefile function that replaced it in Terraform 0.12) is for string templating, not for modular Terraform configuration.
To produce a set of different resource instances per item in a collection, we use resource for_each:
variable "users" {
  type    = set(string)
  default = [
    "blackwidow",
    "hulk",
    "marvel",
  ]
}

resource "aws_transfer_user" "example" {
  for_each = var.users
  # ...
}
resource "aws_s3_bucket_object" "example" {
  for_each = var.users
  # ...
}

resource "aws_transfer_ssh_key" "example" {
  for_each = aws_transfer_user.example
  # ...
}
Inside each of those resource blocks you can use each.key to refer to each one of the usernames. Inside the resource "aws_transfer_ssh_key" "example" block, because I used aws_transfer_user.example as the repetition expression, you can also use each.value to access the attributes of the corresponding aws_transfer_user object. That for_each expression also serves to tell Terraform that aws_transfer_ssh_key.example depends on aws_transfer_user.example.
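As a concrete sketch of that last block (hedged: the public-key path and the exact attribute wiring are illustrative assumptions, not from the original answer):

```hcl
# Hypothetical sketch: wiring each SSH key to its corresponding user.
# each.value is the aws_transfer_user object; the key file path is made up.
resource "aws_transfer_ssh_key" "example" {
  for_each = aws_transfer_user.example

  server_id = each.value.server_id
  user_name = each.value.user_name
  body      = file("${path.module}/keys/${each.key}.pub")
}
```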

Get current Environment name

I have an AWS question: I have an application running on Beanstalk. I have two environments, XXX-LIVE and XXX-TEST.
I would like to know how I can get the Environment name using the SDK, since I want to point to my test database if the code is running on the XXX-TEST environment?
So far I have only found the .RetrieveEnvironmentInfo() method of the object AWSClientFactory.CreateAmazonElasticBeanstalkClient();
But this requires that you provide the Environment name/ID.
Can anyone help?
Here's how we do it for our application in Ruby:
require 'net/http'
require 'uri'
require 'json'
require 'aws-sdk'

def self.beanstalk_env
  begin
    uuid = File.readlines('/sys/hypervisor/uuid')
    if uuid
      str = uuid.first.slice(0, 3)
      if str == 'ec2'
        metadata_endpoint = 'http://169.254.169.254/latest/meta-data/'
        dynamic_endpoint  = 'http://169.254.169.254/latest/dynamic/'
        instance_id = Net::HTTP.get(URI.parse(metadata_endpoint + 'instance-id'))
        document    = Net::HTTP.get(URI.parse(dynamic_endpoint + 'instance-identity/document'))
        parsed_document = JSON.parse(document)
        region = parsed_document['region']
        ec2 = AWS::EC2::Client.new(region: region)
        ec2.describe_instances({ instance_ids: [instance_id] }).reservation_set[0].instances_set[0].tag_set.each do |tag|
          if tag.key == 'elasticbeanstalk:environment-name'
            return tag.value
          end
        end
      end
    end
  rescue
  end
  'No_Env'
end
Your instance's IAM policy will have to allow ec2:Describe*:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:Describe*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
You can add a custom "environment-name" parameter to both environments. Set the value to the name of the environment, or just specify "test" or "production".
If the database access URL is the only difference between the two, then set the URL as a parameter and you will end up with identical code with no branches.
More details on customization can be found here: Customizing and Configuring AWS Elastic Beanstalk Environments