Converting Set to List Yields Unpredictable Number of Elements - amazon-web-services

Terraform v0.13.7
provider registry.terraform.io/hashicorp/aws v2.70.0 / v3.59.0
provider registry.terraform.io/hashicorp/template v2.2.0
Hi,
I'm trying to move from version 2.70.0 to version 3.X of the AWS provider plug-in. That entails dealing with a change in the domain_validation_options attribute of aws_acm_certificate, which becomes a set rather than a list.
I have code managing a certificate with 3 SANs in addition to the main certificate name. With version 2.70.0 of the AWS provider plug-in, as expected, this produces a four-element list, as can be seen in this output from a terraform show -json planfile:
"domain_validation_options": [
{
"domain_name": "example.net",
"resource_record_name": "blah-1.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-1.acm-validations.aws."
},
{
"domain_name": "*.a.example.net",
"resource_record_name": "blah-2.a.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-2.acm-validations.aws."
},
{
"domain_name": "*.b.example.net",
"resource_record_name": "blah-3.b.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-3.acm-validations.aws."
},
{
"domain_name": "*.c.example.net",
"resource_record_name": "blah-4.c.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-4.acm-validations.aws."
}
],
Also as expected, I can address each one of these list elements by its index, e.g.:
aws_acm_certificate.sslcert.domain_validation_options[0]
When I install a 3.X version of the plug-in, however, a set is returned. I am trying to make things easy by converting it to a list with the tolist() function. That returns a lexicographically ordered list, which is expected.
"domain_validation_options": [
{
"domain_name": "*.a.example.net",
"resource_record_name": "blah-2.a.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-2.acm-validations.aws."
},
{
"domain_name": "*.b.example.net",
"resource_record_name": "blah-3.b.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-3.acm-validations.aws."
},
{
"domain_name": "*.c.example.net",
"resource_record_name": "blah-4.c.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-4.acm-validations.aws."
},
{
"domain_name": "example.net",
"resource_record_name": "blah-1.example.net.",
"resource_record_type": "CNAME",
"resource_record_value": "blah-1.acm-validations.aws."
}
],
What is unexpected is that Terraform reports a two-element rather than a four-element list. When I try to access what appears to be the last element in the second terraform show -json planfile output quoted above ("domain_name": "example.net"), I get the following error:
on common/certificate/certificate.tf line 14, in locals:
14: vopt = tolist(aws_acm_certificate.sslcert.domain_validation_options)[3]
aws_acm_certificate.sslcert.domain_validation_options is set of object with 2 elements
Can anyone help me understand why this is happening? And is there a more reliable way to inspect variables than browsing the output of terraform show -json planfile?
Thanks!
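On the inspection side, terraform console evaluates expressions against the current state interactively, which can be quicker than digging through plan JSON. For example (using the resource address from the code below; the reported length matches the error message):

```
$ terraform console
> length(aws_acm_certificate.sslcert.domain_validation_options)
2
```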
UPDATE
Sorry, I probably should have shared the code that is generating my problem. Here is an anonymized version of it.
Calling module:
module "sslcert-example_net" {
source = "./common/certificate"
name = "example.net"
SAN = [
"*.a.example.net",
"*.b.example.net",
"*.c.example.net",
]
}
Child module:
resource "aws_acm_certificate" "sslcert" {
domain_name = var.name
subject_alternative_names = var.SAN
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
locals {
vopt = tolist(aws_acm_certificate.sslcert.domain_validation_options)[3]
}
resource "ns1_record" "dnsvalidation-sslcert" {
depends_on = [aws_acm_certificate.sslcert]
zone = var.name
domain = substr(
local.vopt["resource_record_name"],
0,
length(local.vopt["resource_record_name"]) - 1,
)
type = "CNAME"
answers {
answer = local.vopt["resource_record_value"]
}
}
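For reference, the AWS provider 3.x upgrade guide recommends iterating over the set with for_each keyed by domain name rather than indexing into it by position, which sidesteps ordering questions entirely. A sketch adapted to the ns1_record resource above (the guide's example uses aws_route53_record; the adaptation here is illustrative):

```hcl
resource "ns1_record" "dnsvalidation-sslcert" {
  # One record per validation option, keyed by domain name
  # instead of by list index.
  for_each = {
    for dvo in aws_acm_certificate.sslcert.domain_validation_options :
    dvo.domain_name => dvo
  }

  zone = var.name
  # Trim the trailing dot from the ACM-provided record name.
  domain = trimsuffix(each.value.resource_record_name, ".")
  type   = each.value.resource_record_type

  answers {
    answer = each.value.resource_record_value
  }
}
```

The explicit depends_on is no longer needed, since referencing the certificate's attributes creates an implicit dependency.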

Related

Multiple values in EMR Cluster Configuration template

Within my EMR module I have a template that is deployed for the cluster configuration. Within this template are all the cluster configuration requirements for the given classification types specified in the variable emr_cluster_applications, e.g. Spark, Hadoop, Hive.
Visual:
emr_cluster_applications = ["Spark", "Hadoop", "Hive"]
emr_cluster_configurations = file("./filepath/to/template.json")
This setup works fine; however, moving forward I'm wondering if the template can be populated based on the values within the emr_cluster_applications variable.
For example, in a separate deployment, if ["Spark", "Hadoop"] were specified as opposed to all three, then the template file would only use the corresponding Spark and Hadoop configurations, with Hive being ignored although still present in the file. Is this possible?
Update:
Template file:
[
{
"Classification": "spark",
"Properties":{
"maximizeResourceAllocation": "false",
"spark.executor.memoryOverhead": "4G"
}
},
{
"Classification": "hive",
"Properties":{
"javax.jdo.option.ConnectionURL": XXXX
"javax.jdo.option.ConnectionDriverName": XXXX
"javax.jdo.option.ConnectionUserName": XXXX
"javax.jdo.option.ConnectionPassword": XXXX
}
},
{
"Classification": "hbase-site",
"Properties": {
"hbase.rootdir": "XXXXXXXXXX"
}
},
{
"Classification": "hbase",
"Properties":{
"hbase.emr.storageMode": "s3"
"hbase.emr.readreplica.emnabled": "true"
}
}
]
This is the best I could come up with, and there might be better solutions, so take it with a grain of salt. I had problems mapping Hadoop to two different elements from the JSON, so I had to make some modifications to the variables to get it to work. I strongly suggest doing any variable manipulation within a locals block to avoid clutter in the resources. The locals.tf example:
locals {
emr_template = [
{
"Classification" : "spark",
"Properties" : {
"maximizeResourceAllocation" : "false",
"spark.executor.memoryOverhead" : "4G"
}
},
{
"Classification" : "hive",
"Properties" : {
"javax.jdo.option.ConnectionURL" : "XXXX",
"javax.jdo.option.ConnectionDriverName" : "XXXX",
"javax.jdo.option.ConnectionUserName" : "XXXX",
"javax.jdo.option.ConnectionPassword" : "XXXX"
}
},
{
"Classification" : "hbase-site",
"Properties" : {
"hbase.rootdir" : "XXXXXXXXXX"
}
},
{
"Classification" : "hbase",
"Properties" : {
"hbase.emr.storageMode" : "s3",
"hbase.emr.readreplica.emnabled" : "true"
}
}
]
emr_template_mapping = { for template in local.emr_template : template.Classification => template }
hadoop_enabled = false
hadoop = local.hadoop_enabled ? ["hbase", "hbase-site"] : []
apps_enabled = ["spark", "hive"]
emr_cluster_applications = concat(local.apps_enabled, local.hadoop)
}
You can manipulate which apps will be added with two options:
If Hadoop is enabled, hbase and hbase-site need to be added to the list of allowed apps. If it is not enabled, the value of the hadoop variable will be an empty list.
In the apps_enabled local variable you decide which ones you want to enable, i.e., spark, hive, none, or both.
Finally, in the emr_cluster_applications local variable you would use concat to concatenate the two lists into one.
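For instance, with the values in the locals block above, the intermediate values would evaluate roughly as follows (illustrative):

```hcl
# hadoop_enabled = false:
#   local.hadoop                   = []
#   local.emr_cluster_applications = ["spark", "hive"]
#
# hadoop_enabled = true:
#   local.hadoop                   = ["hbase", "hbase-site"]
#   local.emr_cluster_applications = ["spark", "hive", "hbase", "hbase-site"]
```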
Then, to create a JSON file locally, you could use the local_file option:
resource "local_file" "emr_template_file" {
content = jsonencode([for app in local.emr_cluster_applications :
local.emr_template_mapping[app] if contains(keys(local.emr_template_mapping), app)
]
)
filename = "${path.root}/template.json"
}
The local_file will output a JSON encoded file which can be used where you need it. I am pretty sure there are better ways to do it, so maybe someone else will see this and give a better answer.
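As a possible alternative: if the JSON ends up feeding an aws_emr_cluster resource, you may not need an intermediate file at all, since the cluster's configurations_json argument accepts a JSON string directly. A sketch (other required cluster arguments omitted; this assumes the same locals as above):

```hcl
resource "aws_emr_cluster" "this" {
  # ... name, release_label, instance groups, service/instance roles, etc. ...

  # Pass the filtered template inline instead of via local_file.
  configurations_json = jsonencode([
    for app in local.emr_cluster_applications :
    local.emr_template_mapping[app] if contains(keys(local.emr_template_mapping), app)
  ])
}
```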

AppFlow upsert error: ID does not exist in the destination connector

Creating an AppFlow flow from an S3 bucket to Salesforce through CDK with the upsert option.
Using an existing connection from S3 to Salesforce:
new appflow.CfnConnectorProfile(this, 'Connector',{
"connectionMode": "Public",
"connectorProfileName":"connection_name",
"connectorType":"Salesforce"
})
Destination flow code:
new appflow.CfnFlow(this, 'Flow', {
destinationFlowConfigList: [
{
"connectorProfileName": "connection_name",
"connectorType": "Salesforce",
"destinationConnectorProperties": {
"salesforce": {
"errorHandlingConfig": {
"bucketName": "bucket-name",
"bucketPrefix": "subfolder",
},
"idFieldNames": [
"ID"
],
"object": "object_name",
"writeOperationType": "UPSERT"
}
}
}
],
..... other props ....
}
tasks: [
{
"taskType":"Filter",
"sourceFields": [
"ID",
"Some other fields",
...
],
"connectorOperator": {
"salesforce": "PROJECTION"
}
},
{
"taskType":"Map",
"sourceFields": [
"ID"
],
"taskProperties": [
{
"key":"SOURCE_DATA_TYPE",
"value":"Text"
},
{
"key":"DESTINATION_DATA_TYPE",
"value":"Text"
}
],
"destinationField": "ID",
"connectorOperator": {
"salesforce":"PROJECTION"
}
},
{
.... some other mapping fields.....
}
But the problem is: "Invalid request provided: AWS::AppFlow::FlowCreate Flow request failed: [ID does not exist in the destination connector]"
Given this error, how do I fix the problem with the existing connector that results in "ID does not exist in the destination connector"?
PS: ID is defined in the flow code, but it still says ID is not found.
I think your last connector operator should be:
"connectorOperator": {
"salesforce":"NO_OP"
}
instead of:
"connectorOperator": {
"salesforce":"PROJECTION"
}
since you are mapping the field ID into itself without any transformations whatsoever.
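Applied to the Map task in the question, the corrected block would look like this (only the connectorOperator changes):

```json
{
  "taskType": "Map",
  "sourceFields": ["ID"],
  "taskProperties": [
    { "key": "SOURCE_DATA_TYPE", "value": "Text" },
    { "key": "DESTINATION_DATA_TYPE", "value": "Text" }
  ],
  "destinationField": "ID",
  "connectorOperator": { "salesforce": "NO_OP" }
}
```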

AWS Greengrass v2 - Lambda function access to local resources

Hope you're all doing as well as possible during these covid times.
Overview
I have a lambda function that runs on a Raspberry Pi device with Greengrass version 1. This lambda accesses my USB port, which has an XBee on it (/dev/ttyUSB0), and sends the data to MQTT on IoT Core; it has been working for some months. It functions this way: my GGC receives 5 packages every 5 minutes from a remote station that has some sensors, and after unpacking this data, it sends it as JSON through MQTT.
I'm currently trying to update my GGC_v1 to GGC_v2 and am facing a problem when deploying it. I'm not able to access the local resource on version two when running the same lambda function, even though the recipe grants read and write access to the device.
On GGC_v1 it uses the configuration below:
Make this function long-lived and keep it running indefinitely
Use group default (currently: Greengrass container)
Use group default (currently: ggc_user/ggc_group)
Also added access to resource /dev/ttyUSB0.
Problem Log:
2021-07-13T20:07:22.890Z [INFO] (pool-2-thread-58) com.weatherStation.XBee: Finding mounted cgroups.. {serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=STARTING}
2021-07-13T20:07:22.909Z [INFO] (Copier) com.weatherStation.XBee: Startup script exited. {exitCode=1, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=STARTING}
2021-07-13T20:07:22.915Z [INFO] (pool-2-thread-53) com.weatherStation.XBee: shell-runner-start. {scriptName=services.com.weatherStation.XBee.lifecycle.shutdown.script, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN, command=["/greengrass/v2/packages/artifacts/aws.greengrass.LambdaLauncher/2.0.7/lambda-l..."]}
2021-07-13T20:07:23.102Z [WARN] (Copier) com.weatherStation.XBee: stderr. 2021/07/13 17:07:23 could not read process state file /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/state.json: open /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/state.json: no such file or directory. {scriptName=services.com.weatherStation.XBee.lifecycle.shutdown.script, serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN}
2021-07-13T20:07:23.220Z [ERROR] (pool-2-thread-60) com.weatherStation.XBee: error while removing dir {"path": "/greengrass/v2/work/com.weatherStation.XBee/work/worker/0", "errorString": "unlinkat /greengrass/v2/work/com.weatherStation.XBee/work/worker/0/overlays: device or resource busy"}. {serviceInstance=0, serviceName=com.weatherStation.XBee, currentState=BROKEN}
Recipe:
"RecipeFormatVersion": "2020-01-25",
"ComponentName": "com.weatherStation.XBee",
"ComponentVersion": "5.0.2",
"ComponentType": "aws.greengrass.lambda",
"ComponentDescription": "",
"ComponentPublisher": "AWS Lambda",
"ComponentSource": "arn:aws:lambda:region:account_id:function:Greengrass_WeatherStation",
"ComponentConfiguration": {
"DefaultConfiguration": {
"lambdaExecutionParameters": {
"EnvironmentVariables": {}
},
"containerParams": {
"memorySize": 16000,
"mountROSysfs": false,
"volumes": {},
"devices": {
"0": {
"path": "/dev/ttyUSB0",
"permission": "rw",
"addGroupOwner": true
}
}
},
"containerMode": "GreengrassContainer",
"timeoutInSeconds": 15,
"maxInstancesCount": 100,
"inputPayloadEncodingType": "json",
"maxQueueSize": 1000,
"pinned": true,
"maxIdleTimeInSeconds": 60,
"statusTimeoutInSeconds": 60,
"pubsubTopics": {
"0": {
"topic": "ggc/weather_station/data",
"type": "IOT_CORE"
}
}
}
},
"ComponentDependencies": {
"aws.greengrass.LambdaLauncher": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "HARD"
},
"aws.greengrass.TokenExchangeService": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "HARD"
},
"aws.greengrass.LambdaRuntimes": {
"VersionRequirement": ">=2.0.0 <3.0.0",
"DependencyType": "SOFT"
}
},
"Manifests": [
{
"Platform": {
"os": "linux",
"architecture": "arm"
},
"Lifecycle": {},
"Artifacts": [
{
"Uri": "greengrass:lambda-artifact.zip",
"Digest": "GVgaQlVuSYmfgbwoStd5dfB9WamdQgrhbE72s2fF04ysno=",
"Algorithm": "SHA-256",
"Unarchive": "ZIP",
"Permission": {
"Read": "OWNER",
"Execute": "NONE"
}
}
]
}
],
"Lifecycle": {
"startup": {
"requiresPrivilege": true,
"script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher start"
},
"setenv": {
"AWS_GREENGRASS_LAMBDA_CONTAINER_MODE": "{configuration:/containerMode}",
"AWS_GREENGRASS_LAMBDA_ARN": "arn:aws:lambda:region:account_id:function:Greengrass_WeatherStation:5",
"AWS_GREENGRASS_LAMBDA_FUNCTION_HANDLER": "main.weather_handler",
"AWS_GREENGRASS_LAMBDA_ARTIFACT_PATH": "{artifacts:decompressedPath}/lambda-artifact",
"AWS_GREENGRASS_LAMBDA_CONTAINER_PARAMS": "{configuration:/containerParams}",
"AWS_GREENGRASS_LAMBDA_STATUS_TIMEOUT_SECONDS": "{configuration:/statusTimeoutInSeconds}",
"AWS_GREENGRASS_LAMBDA_ENCODING_TYPE": "{configuration:/inputPayloadEncodingType}",
"AWS_GREENGRASS_LAMBDA_PARAMS": "{configuration:/lambdaExecutionParameters}",
"AWS_GREENGRASS_LAMBDA_RUNTIME_PATH": "{aws.greengrass.LambdaRuntimes:artifacts:decompressedPath}/runtime/",
"AWS_GREENGRASS_LAMBDA_EXEC_ARGS": "[\"python3.7\",\"-u\",\"/runtime/python/lambda_runtime.py\",\"--handler=main.weather_handler\"]",
"AWS_GREENGRASS_LAMBDA_RUNTIME": "python3.7"
},
"shutdown": {
"requiresPrivilege": true,
"script": "{aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher stop; {aws.greengrass.LambdaLauncher:artifacts:path}/lambda-launcher clean"
}
}
}

Terraform - Specifying multiple possible values for Variables

CloudFormation provides AllowedValues for Parameters, which restricts the value of the parameter to this list. How can I achieve this with Terraform variables? The variable type of list does not provide this functionality. So, if I want my variable to take only one of two possible values, how can I achieve this with Terraform? The CloudFormation snippet I want to replicate is:
"ParameterName": {
"Description": "desc",
"Type": "String",
"Default": true,
"AllowedValues": [
"true",
"false"
]
}
I don't know of an official way, but there's an interesting technique described in a Terraform issue:
variable "values_list" {
description = "acceptable values"
type = "list"
default = ["true", "false"]
}
variable "somevar" {
description = "must be true or false"
}
resource "null_resource" "is_variable_value_valid" {
count = "${contains(var.values_list, var.somevar) == true ? 0 : 1}"
"ERROR: The somevar value can only be: true or false" = true
}
Update:
Terraform now offers custom validation rules, available since Terraform 0.13:
variable "somevar" {
type = string
description = "must be true or false"
validation {
condition = can(regex("^(true|false)$", var.somevar))
error_message = "Must be true or false."
}
}
Custom validation rules are definitely the way to go. If you want to keep things simple and check the provided value against a list of valid ones, you can use the following in your variables.tf config:
variable "environment" {
type = string
description = "Deployment environment"
validation {
condition = contains(["dev", "prod"], var.environment)
error_message = "Valid value is one of the following: dev, prod."
}
}
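With a rule like this in place, an out-of-range value fails at plan time; the output looks roughly like the following (exact formatting varies by Terraform version):

```
$ terraform plan -var 'environment=staging'
...
Error: Invalid value for variable
...
Valid value is one of the following: dev, prod.
```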
A variation on the above answer that uses an array/list:
variable "appservice_sku" {
type = string
description = "AppService Plan SKU code"
default = "P1v3"
validation {
error_message = "Please use a valid AppService SKU."
condition = can(regex(join("", concat(["^("], [join("|", [
"B1", "B2", "B3", "D1", "F1",
"FREE", "I1", "I1v2", "I2", "I2v2",
"I3", "I3v2", "P1V2", "P1V3", "P2V2",
"P2V3", "P3V2", "P3V3", "PC2",
"PC3", "PC4", "S1", "S2", "S3",
"SHARED", "WS1", "WS2", "WS3"
])], [")$"])), var.appservice_sku))
}
}

How to pass a list to a nested stack parameter in AWS CloudFormation?

I'm using a nested stack to create ELB and application stacks, and I need to pass a list of subnets to the ELB and application stacks.
The main JSON has the below code:
"Mappings":{
"params":{
"Subnets": {
"dev":[
"subnet-1”,
"subnet-2”
],
"test":[
"subnet-3”,
"subnet-4”,
"subnet-5”,
"subnet-6”
],
"prod":[
"subnet-7”,
"subnet-8”,
"subnet-9”
]
}
}
},
"Parameters":{
"Environment":{
"AllowedValues":[
"prod",
"preprod",
"dev"
],
"Default":"prod",
"Description":"What environment type is it (prod, preprod, test, dev)?",
"Type":"String"
}
},
Resources:{
"ELBStack": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": {
"Fn::Join":[
"",
[
"https://s3.amazonaws.com/",
"myS3bucket",
"/ELB.json"
]
]
},
"Parameters": {
"Environment":{"Ref":"Environment"},
"ELBSHORTNAME":{"Ref":"ELBSHORTNAME"},
"Subnets":{"Fn::FindInMap":[
"params",
"Subnets",
{
"Ref":"Environment"
}
]},
"S3Bucket":{"Ref":"S3Bucket"},
},
"TimeoutInMinutes": "60"
}
}
Now when I run this JSON using Lambda or CloudFormation, I get the below error under the CloudFormation Events tab:
CREATE_FAILED AWS::CloudFormation::Stack ELBStack Value of property Parameters must be an object with String (or simple type) properties
using the below Lambda:
import boto3
import time
date = time.strftime("%Y%m%d")
time = time.strftime("%H%M%S")
stackname = 'FulfillSNSELB'
client = boto3.client('cloudformation')
response = client.create_stack(
StackName= (stackname + '-' + date + '-' + time),
TemplateURL='https://s3.amazonaws.com/****/**/myapp.json',
Parameters=[
{
'ParameterKey': 'Environment',
'ParameterValue': 'dev',
'UsePreviousValue': False
}]
)
def lambda_handler(event, context):
return(response)
You can't pass a list to a nested stack. You have to pass a concatenation of items with the intrinsic function Join like this: !Join ["separator", [item1, item2, …]].
In the nested stack, the type of the parameter needs to be List<Type>.
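Concretely, the parent stack would pass the mapped list joined into a single string. In the parent's "Parameters" block for the ELBStack resource (sketch):

```json
"Subnets": {
  "Fn::Join": [",", {"Fn::FindInMap": ["params", "Subnets", {"Ref": "Environment"}]}]
}
```

And the nested ELB.json would declare the parameter with a list type, so CloudFormation splits it back into a list (CommaDelimitedList also works for plain strings):

```json
"Subnets": {
  "Type": "List<AWS::EC2::Subnet::Id>",
  "Description": "Subnets for the ELB"
}
```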
Your JSON is not well-formed. Running your JSON through aws cloudformation validate-template (or even jsonlint.com) quickly reveals several basic syntax errors:
Resources:{ requires the key to be surrounded by quotes: "Resources": {
Some of your quotation marks are invalid 'smart-quotes' "subnet-1”, that need to be replaced with standard ASCII quotes: "subnet-1",
(This is the one your error message refers to) The last element of the "Properties" object in your "ELBStack" resource, "S3Bucket": {"Ref": "S3Bucket"}, is followed by a trailing comma that needs to be removed.