How to create AWS SSM Parameter from Terraform - amazon-web-services

I am trying to copy an AWS SSM parameter (for CloudWatch) from one region to another. I have the JSON, which is stored as a String parameter in one region.
I am trying to write a Terraform script to create this SSM parameter in another region.
According to the Terraform documentation, I need to do this:
resource "aws_ssm_parameter" "foo" {
  name  = "foo"
  type  = "String"
  value = "bar"
}
In my case the value is JSON. Is there a way to store the JSON in a file and pass that file as the value to the above resource? I tried using jsonencode:
resource "aws_ssm_parameter" "my-cloudwatch" {
  name  = "my-cloudwatch"
  type  = "String"
  value = jsonencode({my-json})
}
That did not work either. I am getting this error:
Extra characters after interpolation expression
I believe this is because the JSON has characters like quotes and colons.
Any ideas?

I tested the following and it worked for me:
resource "aws_ssm_parameter" "my-cloudwatch" {
  name  = "my-cloudwatch"
  type  = "String"
  #value = file("${path.module}/ssm-param.json")
  value = jsonencode(file("${path.module}/files/ssm-param.json"))
}
./files/ssm-param.json content:
{
"Value": "Something"
}
and the parameter store value looks like this:
"{\n \"Value\": \"Something\"\n}"
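A minimal Python sketch of why the stored value ends up wrapped in escaped quotes, with json.dumps standing in for Terraform's jsonencode (the file contents are copied from the example above):

```python
import json

# Contents of ./files/ssm-param.json, which is already valid JSON text.
raw = '{\n "Value": "Something"\n}'

# jsonencode() applied to a *string* JSON-quotes the whole string, which
# is why the parameter store value shows escaped quotes and \n sequences.
double_encoded = json.dumps(raw)
print(double_encoded)  # "{\n \"Value\": \"Something\"\n}"

# Passing the file contents straight through (the commented-out
# file(...) line) would store the JSON as-is instead.
print(raw)
```

So the jsonencode(file(...)) variant stores a JSON-encoded string of JSON, not the JSON itself; whether that matters depends on how the consumer reads the parameter.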

I just faced this issue: the $ in the CloudWatch config is causing the problem. Use $$:
"Note: If you specify the template as a literal string instead of loading a file, the inline template must use double dollar signs (like $${hello}) to prevent Terraform from interpolating values from the configuration into the string."
https://www.terraform.io/docs/configuration-0-11/interpolation.html
"metrics": {
  "append_dimensions": {
    "AutoScalingGroupName": "$${aws:AutoScalingGroupName}",
    "ImageId": "$${aws:ImageId}",
    "InstanceId": "$${aws:InstanceId}",
    "InstanceType": "$${aws:InstanceType}"
  },
I prefer Paul's approach though.

You need to insert your JSON with escaped quotes; it is a little trick in AWS, and you need to parse the value when you retrieve it:
const value = JSON.parse(Value)
Example of an insert:
"Value": "\"{\"flag\":\"market_store\",\"app\":\"ios\",\"enabled\":\"false\"}\"",
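A minimal Python sketch of the same round trip, with json.dumps/json.loads standing in for the insert step and the JSON.parse(Value) call above (the flag payload is taken from the example):

```python
import json

payload = {"flag": "market_store", "app": "ios", "enabled": "false"}

# Store the object as a JSON *string* inside the parameter value, so the
# stored value carries escaped quotes, as in the insert example above.
stored = json.dumps(payload)

# On retrieval, parse the string back into a structure
# (the JSON.parse(Value) step).
value = json.loads(stored)
print(value["flag"])  # market_store
```

The escaping only exists in the stored representation; after one parse you are back to the original structure.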

Related

Is it possible to insert a value into a file before it gets used in a terraform resource?

I'm deploying an Amazon Connect instance with an attached contact flow, following this documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/connect_contact_flow
My contact flow is stored in a file, so I'm using the following:
resource "aws_connect_contact_flow" "general" {
  instance_id  = aws_connect_instance.dev.id
  name         = "General"
  description  = "General Flow routing customers to queues"
  filename     = "flows/contact_flow.json"
  content_hash = filebase64sha256("flows/contact_flow.json")
}
However my contact flow requires me to specify the ARN of an AWS Lambda function in one particular section:
...
"parameters": [
  {
    "name": "FunctionArn",
    "value": "<lambda function arn>",
    "namespace": null
  },
  {
    "name": "TimeLimit",
    "value": "3"
  }
],
</snip>
...
The value I would like to substitute into the JSON file at <lambda function arn> before the contact flow is created is accessible at
data.terraform_remote_state.remote.outputs.lambda_arn
Is there any way to achieve this? Or will I have to use the content argument described in the documentation linked above to achieve what I need?
Thanks.
If you really want to use filename instead of content, you have to write the rendered template to a temporary file using local_file.
But using content with templatefile directly would probably be easier. For that you would have to convert your flows/contact_flow.json into a template, using templatefile's ${...} placeholder syntax:
"parameters": [
  {
    "name": "FunctionArn",
    "value": "${arn}",
    "namespace": null
  },
  {
    "name": "TimeLimit",
    "value": "3"
  }
],
then, for example:
locals {
  contact_flow = templatefile("flows/contact_flow.json", {
    arn = data.terraform_remote_state.remote.outputs.lambda_arn
  })
}

resource "aws_connect_contact_flow" "general" {
  instance_id = aws_connect_instance.dev.id
  name        = "General"
  description = "General Flow routing customers to queues"
  content     = local.contact_flow
}
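The rendering step can be sketched in Python: string.Template happens to share the ${name} placeholder syntax with templatefile (the ARN below is a made-up stand-in for the remote state output):

```python
from string import Template

# One parameter entry from the contact-flow template, using ${arn}
# as the placeholder, mirroring the templatefile version above.
template = Template('{"name": "FunctionArn", "value": "${arn}", "namespace": null}')

# Hypothetical ARN standing in for
# data.terraform_remote_state.remote.outputs.lambda_arn.
rendered = template.substitute(
    arn="arn:aws:lambda:us-east-1:123456789012:function:example"
)
print(rendered)
```

templatefile does the same substitution at plan time, so the rendered JSON is what gets sent as the resource's content.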

Terraform - How to output input variable?

I have a custom Terraform module which creates an AWS EC2 instance, so it relies on the aws provider.
This custom module is used as a base to describe the instance I want to create, but I also need some other information that will be reused later.
For example, I want to define a description for the VM as an input variable, but I don't need to use it at all to create the VM with the aws provider.
I just want this input variable to be passed directly through as an output so it can be reused later once Terraform has done its job.
For example, here is what I have as an input variable:
variable "description" {
  type        = string
  description = "Description of the instance"
}
And what I want to expose as an output variable:
output "description" {
  value = module.ec2_instance.description
}
What my main module is doing:
module "ec2_instance" {
  source            = "./modules/aws_ec2"
  ami_id            = var.ami_id
  instance_name     = var.hostname
  disk_size         = var.disk_size
  create_disk       = var.create_disk
  availability_zone = var.availability_zone
  disk_type         = var.disk_type
  // I don't need the description variable for the module to work, and I don't
  // want to do anything with it here; I need it later as an output
}
I feel stupid, because I searched the web for an answer and can't find anything on how to do that.
Can you help?
Thanks
EDIT: Added example of code
If you have an input variable declared like this:
variable "description" {
  type = string
}
...then you can return its value as an output value like this, in the same module where you declared it:
output "description" {
  value = var.description
}

List of Active Directory DNS servers IP addresses in an SSM document

I am converting my 0.11 code to 0.12. Most things seem to be working out well, but I am really lost on the SSM document.
In my 0.11 code, I had this code:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"
  content       = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": [
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[0]}",
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[1]}"
        ]
      }
    }
  }
}
DOC
  depends_on = ["aws_directory_service_directory.microsoftad-lab"]
}
This worked reasonably well. However, Terraform 0.12 does not accept this code, saying:
This value does not have any indices.
I have been trying to look up different solutions on the web, but I keep running into datatype issues. For example, one of the solutions I have seen proposes this:
"dnsIpAddresses": [
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[0]}",
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[1]}",
]
}
and with that I am getting:
InvalidDocumentContent: JSON not well-formed
which is kind of weird to me, since when I look at the trace log I seem to be getting relatively correct values:
{"Content":"{\n \"schemaVersion\": \"1.0\",\n \"description\": \"Automatic Domain Join Configuration\",\n \"runtimeConfig\": {\n \"aws:domainJoin\": {\n \"properties\": {\n \"directoryId\": \"d-9967245377\",\n \"directoryName\": \"012mig.lab\",\n \"dnsIpAddresses\": [\n \"10.0.0.227\",\n \"10.0.7.103\",\n ]\n }\n }\n }\n}\n \n","DocumentFormat":"JSON","DocumentType":"Command","Name":"ssm_document_012mig.lab"}
I have tried concat and list to put the values together, but then I get datatype errors. Right now it feels like I am going around in loops.
Does anyone have any direction to give me here?
Terraform 0.12 has stricter types than 0.11 and does less automatic type coercion under the covers, so here you're running into the fact that the aws_directory_service_directory resource's dns_ip_addresses attribute isn't a list but a set:
"dns_ip_addresses": {
  Type:     schema.TypeSet,
  Elem:     &schema.Schema{Type: schema.TypeString},
  Set:      schema.HashString,
  Computed: true,
},
Sets can't be indexed directly and must first be converted to a list explicitly in 0.12.
As an example:
variable "example_list" {
  type    = list(string)
  default = [
    "foo",
    "bar",
  ]
}

output "list_first_element" {
  value = var.example_list[0]
}
Running terraform apply on this will output the following:
Outputs:

list_first_element = foo
However, if we use a set variable instead:
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = var.example_set[0]
}
Then attempting to run terraform apply will throw the following error:
Error: Invalid index

  on main.tf line 22, in output "set_first_element":
  22:   value = var.example_set[0]

This value does not have any indices.
If we first convert the set variable into a list with tolist then it works:
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = tolist(var.example_set)[0]
}
Outputs:

set_first_element = bar
Note that sets may have different ordering than you expect (in this case it is ordered alphabetically rather than as declared). In your case this isn't an issue, but it's worth thinking about when indexing and expecting the elements to be in the order you declared them.
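The same distinction exists in Python, which makes for a quick sanity check: Python sets, like Terraform sets, are unordered and cannot be indexed, and sorting the converted list gives the alphabetical order seen above:

```python
example_list = ["foo", "bar"]
print(example_list[0])  # foo

example_set = {"foo", "bar"}
try:
    example_set[0]  # sets have no indices
except TypeError as err:
    print(err)

# Converting to a list first works; sorting pins down the order,
# which is alphabetical here, like Terraform's tolist output above.
print(sorted(example_set)[0])  # bar
```

The lesson carries over directly: once a value is a set, positional access only makes sense after an explicit, order-defining conversion.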
Another possible option here, instead of building the JSON array from the set or list of outputs by hand, is to directly encode the dns_ip_addresses attribute as JSON with the jsonencode function (this also sidesteps the trailing comma after the last IP address, visible in the trace log above, which is what made the document invalid JSON):
variable "example_set" {
  type    = set(string)
  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = jsonencode(var.example_set)
}
Which outputs the following after running terraform apply:
Outputs:

set_first_element = ["bar","foo"]
So for your specific example we would want to do something like this:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"
  content       = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": ${jsonencode(aws_directory_service_directory.microsoftad-lab.dns_ip_addresses)}
      }
    }
  }
}
DOC
}
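A small Python sketch of why this fixes the InvalidDocumentContent error: letting the JSON encoder emit the array guarantees well-formed output with no trailing comma. json.dumps stands in for jsonencode, and the directory values are copied from the trace log above:

```python
import json

# Values standing in for the aws_directory_service_directory
# attributes interpolated in the heredoc above.
dns_ip_addresses = ["10.0.0.227", "10.0.7.103"]

content = {
    "schemaVersion": "1.0",
    "description": "Automatic Domain Join Configuration",
    "runtimeConfig": {
        "aws:domainJoin": {
            "properties": {
                "directoryId": "d-9967245377",
                "directoryName": "012mig.lab",
                # The encoder, not hand-written interpolation with a
                # dangling comma, produces the array.
                "dnsIpAddresses": dns_ip_addresses,
            }
        }
    }
}

document = json.dumps(content)
# Round-trips cleanly, so SSM would accept it as well-formed JSON.
print(json.loads(document)["runtimeConfig"]["aws:domainJoin"]["properties"]["dnsIpAddresses"])
```

The same principle applies in Terraform: interpolating jsonencode(...) output into the heredoc keeps the array portion machine-generated and syntactically valid.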
Note that I also removed the unnecessary depends_on. If a resource interpolates values from another resource, Terraform will automatically understand that the interpolated resource needs to be created before the one referencing it.
The resource dependencies documentation goes into this in more detail:
Most resource dependencies are handled automatically. Terraform analyses any expressions within a resource block to find references to other objects, and treats those references as implicit ordering requirements when creating, updating, or destroying resources. Since most resources with behavioral dependencies on other resources also refer to those resources' data, it's usually not necessary to manually specify dependencies between resources.
However, some dependencies cannot be recognized implicitly in configuration. For example, if Terraform must manage access control policies and take actions that require those policies to be present, there is a hidden dependency between the access policy and a resource whose creation depends on it. In these rare cases, the depends_on meta-argument can explicitly specify a dependency.

CDK adds random parameters

So I have this function I'm trying to declare, and it works and deploys just dandy unless you uncomment the logRetention setting. If logRetention is specified, the cdk deploy operation adds additional parameters to the stack. And, of course, this behavior is completely unexplained in the documentation:
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#log-group
SingletonFunction.Builder.create(this, "native-lambda-s3-fun")
    .functionName(funcName)
    .description("")
    // .logRetention(RetentionDays.ONE_DAY)
    .handler("app")
    .timeout(Duration.seconds(300))
    .runtime(Runtime.GO_1_X)
    .uuid(UUID.randomUUID().toString())
    .environment(new HashMap<String, String>() {{
        put("FILE_KEY", "/file/key");
        put("S3_BUCKET", junk.getBucketName());
    }})
    .code(Code.fromBucket(uploads, functionUploadKey(
        "formation-examples",
        "native-lambda-s3",
        lambdaVersion.getValueAsString()
    )))
    .build();
"Parameters": {
  "lambdaVersion": {
    "Type": "String"
  },
  "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3BucketB030C8A8": {
    "Type": "String",
    "Description": "S3 bucket for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
  },
  "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3VersionKey6A2AABD7": {
    "Type": "String",
    "Description": "S3 key for asset version \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
  },
  "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aArtifactHashEDC522F0": {
    "Type": "String",
    "Description": "Artifact hash for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
  }
},
It's a bug. They're Working On It™. So, rejoice - we can probably expect a fix sometime within the next decade.
I haven't tried it yet, but I'm guessing the workaround is to manipulate the low-level CfnLogGroup construct, since it has the authoritative retentionInDays property. The relevant high-level Log Group construct can probably be obtained from the Function via its logGroup property. Failing that, the LogGroup can be created from scratch (which will probably be a headache all on its own).
I also encountered the problem described above. From what I can tell, we are unable to specify a log group name and thus the log group name is predictable.
My solution was to simply create a LogGroup with the same name as my Lambda function with the /aws/lambda/ prefix.
Example:
var function = new Function(
    this,
    "Thing",
    new FunctionProps
    {
        FunctionName = $"{Stack.Of(this).StackName}-Thing",
        // ...
    });

_ = new LogGroup(
    this,
    "ThingLogGroup",
    new LogGroupProps
    {
        LogGroupName = $"/aws/lambda/{function.FunctionName}",
        Retention = RetentionDays.ONE_MONTH,
    });
This does not create unnecessary "AssetParameters..." CF template parameters like the inline option does.
Note: I'm using CDK version 1.111.0 and 1.86.0 with C#

Amazon CLI, route 53, TXT error

I'm trying to create a TXT record in Route53 via the Amazon CLI for DNS-01 validation. Seems like I'm very close but possibly running into a CLI issue (or a formatting issue I don't see). As you can see, it's complaining about a value that should be in quotes, but is indeed in quotes already...
Command Line:
aws route53 change-resource-record-sets --hosted-zone-id ID_HERE --change-batch file://c:\dev\test1.json
JSON File:
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "DOMAIN_NAME_HERE",
        "Type": "TXT",
        "TTL": 60,
        "ResourceRecords": [
          {
            "Value": "test"
          }
        ]
      }
    }
  ]
}
Error:
An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation: Invalid Resource Record: FATAL problem: InvalidCharacterString (Value should be enclosed in quotation marks) encountered with 'test'
Those quotes are the JSON quotes, and those are not the quotes they're looking for.
The JSON string "test" encodes the literal value test.
The JSON string "\"test\"" encodes the literal value "test".
(This is because in JSON, a literal " in a string is escaped with a leading \).
It sounds like Route 53 wants actual, literal quotes included inside the value, so if you're building this JSON manually you probably want the latter: "Value": "\"test\"".
A JSON library will add the escaping for you if you pass it the value with the leading and trailing " included.
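A quick Python check of the distinction, with json.dumps standing in for whatever produces the change-batch file:

```python
import json

# The JSON string "test" encodes the bare value test;
# Route 53 rejects that for a TXT record.
print(json.dumps("test"))    # "test"

# Including literal quotes in the value yields the escaped form
# that belongs inside the change batch.
print(json.dumps('"test"'))  # "\"test\""
```

So the fix is to make the TXT value itself "test" (with quotes), and let the JSON encoding wrap and escape that on top.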