Terraform v0.11 accessing remote state empty outputs - amazon-web-services

I have a few Terraform state files with empty attributes, and a few with some values:
{
  "version": 3,
  "terraform_version": "0.11.14",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    }
  ]
}
I want to pick a default value, say "Null", if the attribute isn't present. My remote state config is:
data "terraform_remote_state" "name" {
backend = "s3"
config {
.....
}
}
If I use "${lookup(data.terraform_remote_state.name.outputs, "attribute", "Null")}", it throws the error: 'data.terraform_remote_state.name' does not have attribute 'outputs'.
Can someone guide me to solve this?
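For reference, in Terraform 0.11 the terraform_remote_state data source exposes each root output as a top-level attribute (data.terraform_remote_state.name.attribute), not through an outputs map, which is why the lookup over .outputs fails. Below is a minimal, unverified sketch of one way to get a fallback value, assuming your 0.11.x release supports the data source's defaults argument; the bucket, key, and attribute names are placeholders, not taken from the question:
data "terraform_remote_state" "name" {
  backend = "s3"

  config {
    bucket = "my-state-bucket"            # placeholder
    key    = "path/to/terraform.tfstate"  # placeholder
    region = "us-east-1"                  # placeholder
  }

  # Hypothetical fallback: used when the referenced state has no such output.
  defaults = {
    attribute = "Null"
  }
}

output "attribute_or_default" {
  # In 0.11, remote state outputs come back as attributes on the data source itself.
  value = "${data.terraform_remote_state.name.attribute}"
}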

Related

Use env variables inside CloudFormation templates

I have an Amplify app with multiple branches.
I have added a custom CloudFormation template with amplify add custom.
It looks like this:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": { "env": { "Type": "String" } },
  "Resources": {
    "db": {
      "Type": "AWS::Timestream::Database",
      "Properties": { "DatabaseName": "dev_db" }
    },
    "timestreamtable": {
      "DependsOn": "db",
      "Type": "AWS::Timestream::Table",
      "Properties": {
        "DatabaseName": "dev_db",
        "TableName": "avg_16_4h",
        "MagneticStoreWriteProperties": { "EnableMagneticStoreWrites": true },
        "RetentionProperties": {
          "MemoryStoreRetentionPeriodInHours": "8640",
          "MagneticStoreRetentionPeriodInDays": "1825"
        }
      }
    }
  },
  "Outputs": {},
  "Description": "{\"createdOn\":\"Windows\",\"createdBy\":\"Amplify\",\"createdWith\":\"8.3.1\",\"stackType\":\"custom-customCloudformation\",\"metadata\":{}}"
}
You can see there is a field called DatabaseName. In my Amplify app I have defined an env variable named TIMESTREAM_DB, and I want to use it inside this CloudFormation file.
Is this possible, or do I need to hard-code everything by hand?
Templates cannot access arbitrary env vars. Instead, CloudFormation injects deploy-time values into a template with Parameters.
Amplify helpfully adds the environment name to the template as the env parameter. Following the Amplify docs, use the env value as a suffix on the AWS::Timestream::Database name:
"DatabaseName": "Fn::Join": [ "", [ "my-timestream-db-name-", { "Ref": "env" } ] ]
The AWS::Timestream::Table resource also requires a DatabaseName property. You could repeat the above, but it's more DRY to get the name via the Database's Ref:
"DatabaseName": { "Ref" : "db" }

Where/how do I define a NotificationConfig in an AWS SSM Automation document?

Say I have an SSM document like the below, and I want to be alerted when a run fails or doesn't finish for whatever reason:
{
  "description": "Restores specified pg_dump backup to specified RDS/DB.",
  "mainSteps": [
    {
      "action": "aws:runCommand",
      "description": "Restores specified pg_dump backup to specified RDS/DB.",
      "inputs": {
        "DocumentName": "AWS-RunShellScript",
        "Parameters": {
          "commands": [
            "blahblahblah"
          ],
          "executionTimeout": "1800"
        },
        "Targets": [
          {
            "Key": "InstanceIds",
            "Values": [
              "i-xxxxxxxx"
            ]
          }
        ]
      },
      "name": "DBRestorer",
      "nextStep": "RunQueries"
    },
The Terraform docs show that RunCommand documents should support a NotificationConfig where I can pass in my SNS topic ARN and declare which state transitions should trigger a message: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_maintenance_window_task#notification_config
However, I can't find any Amazon docs that actually show a notification configuration inside the document itself (as opposed to the maintenance window; mine is set up as an automation task, so it isn't supported at the window level), so I'm not sure whether it belongs as a sub-parameter, or whether to spell it in camel case or with dashes.
Try this:
{
  "description": "Restores specified pg_dump backup to specified RDS/DB.",
  "mainSteps": [
    {
      "action": "aws:runCommand",
      "description": "Restores specified pg_dump backup to specified RDS/DB.",
      "inputs": {
        "DocumentName": "AWS-RunShellScript",
        "NotificationConfig": {
          "NotificationArn": "<<Replace this with a SNS Topic Arn>>",
          "NotificationEvents": ["All"],
          "NotificationType": "Invocation"
        },
        "ServiceRoleArn": "<<Replace this with an IAM role Arn that has access to SNS>>",
        "Parameters": {
          "commands": [
            "blahblahblah"
          ],
          "executionTimeout": "1800"
        },
        "Targets": [
          {
            "Key": "InstanceIds",
            "Values": [
              "i-xxxxxxxx"
            ]
          }
        ]
      },
      "name": "DBRestorer",
      "nextStep": "RunQueries"
    },
    ...
  ]
}
Related documentation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-runcommand.html
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_NotificationConfig.html#systemsmanager-Type-NotificationConfig-NotificationType

Google Cloud Compute Python API Isn't Accepting My Startup Script

Here is my request body:
server = {
    'name': name_gen.haikunate(),
    'machineType': f"zones/{zone}/machineTypes/n1-standard-1",
    'disks': [
        {
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage': 'projects/ubuntu-os-cloud/global/images/ubuntu-1604-xenial-v20191204'
            }
        }
    ],
    'networkInterfaces': [
        {
            'network': '/global/networks/default',
            'accessConfigs': [
                {'type': 'ONE_TO_ONE_NAT', 'name': 'external nat'}
            ]
        }
    ],
    'metadata': {
        'items': [
            {
                'keys': 'startup-script',
                'value': startup_script
            }
        ]
    }
}
When I use this request body with the compute object to create a VM, it gives me this error:
googleapiclient.errors.HttpError:
<HttpError 400 when requesting https://compute.googleapis.com/compute/v1/projects/focal-maker-240918/zones/us-east4-c/instances?alt=json
returned "Invalid value for field 'resource.metadata':
'{
"item": [
{
"value": "#!/bin/bash\n\napt-get update\n\nsleep 15\n\nclear\n\napt-get install squ...'. Metadata invalid keys:">'
How can I fix this error?
If we look at the documentation for setting metadata on a Compute Engine instance (found here), we see that the structure is:
"items": [
{
"key": string,
"value": string
}
],
If we compare that to the structure described in your post, we see that you have coded keys instead of key as the field name. This could easily be the issue. To resolve the problem, change keys to key.
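As a quick sketch, only the metadata portion of the request body needs to change (startup_script is assumed to already hold the script text, as in the original body):
# Only the metadata section changes: the field name must be 'key', not 'keys'.
server['metadata'] = {
    'items': [
        {
            'key': 'startup-script',   # was 'keys' in the original request body
            'value': startup_script
        }
    ]
}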

What does 'schema' refer to in AWS Database Migration Service (DMS) if source database is S3?

I'm trying to transfer a test CSV file from S3 to a DynamoDB table using AWS Database Migration Service. I'm new to AWS, so forgive me if I'm doing something completely wrong.
I've created and tested the source & target endpoints with no problem. However, I've run into some task definition errors (I'm not sure why, but my logs don't appear in CloudWatch).
For simplicity's sake, my test source S3 file has only one column: eventId. The path is as follows: s3://myBucket/testFolder/events/events.csv
This is the JSON external table definition file:
{
  "TableCount": "1",
  "Tables": [
    {
      "TableName": "events",
      "TablePath": "testFolder/events/",
      "TableOwner": "testFolder",
      "TableColumns": [
        {
          "ColumnName": "eventId",
          "ColumnType": "STRING",
          "ColumnNullable": "false",
          "ColumnIsPk": "true",
          "ColumnLength": "10"
        }
      ],
      "TableColumnsTotal": "1"
    }
  ]
}
Here's my task definition:
"rules": [
{
"rule-type": "selection",
"rule-id": "1",
"rule-name": "1",
"object-locator": {
"schema-name": "testFolder",
"table-name": "events"
},
"rule-action": "include"
},
{
"rule-type": "object-mapping",
"rule-id": "2",
"rule-name": "2",
"rule-action": "map-record-to-record",
"object-locator": {
"schema-name": "testFolder",
"table-name": "tableName"
},
"target-table-name": "myTestDynamoDBTable",
"mapping-parameters": {
"partition-key-name": "eventId",
"attribute-mappings": [
{
"target-attribute-name": "eventId",
"attribute-type": "scalar",
"attribute-sub-type": "string",
"value": "${eventId}"
}
]
}
}
]
}
Every time, my task errors out. I'm particularly confused about the schema, since my source file is in S3, so I thought a schema wasn't needed there. I found this line in the AWS docs:
"s3://mybucket/hr/employee. At load time, AWS DMS assumes that the source schema name is hr..." So should I include some sort of schema file in the hr folder?
Apologies if this is wrong, I'd appreciate any advice. Thanks.

How to read the second module block in a tfstate with Terraform?

I have a remote state on S3, and I don't know how to access the second block of the tfstate JSON file.
My tfstate looks like this:
{
  "version": 3,
  "terraform_version": "0.11.7",
  "serial": 1,
  "lineage": "79b7840d-5998-1ea8-2b63-ca49c289ec13",
  "modules": [
    {
      "path": [
        "root"
      ],
      "outputs": {},
      "resources": {},
      "depends_on": []
    },
    {
      "path": [
        "root",
        "vpc"
      ],
      "outputs": {
        "database_subnet_group": {
          all my resources are listed here...
        }
and I can access it with this code:
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = "bakka-tfstate"
key = "global/network.tfstate"
region = "eu-west-1"
}
}
but the output
output "tfstate" {
value = data.terraform_remote_state.network.outputs
}
shows me nothing:
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
tfstate = {}
I believe it's because there are two JSON blocks in this tfstate and Terraform reads the first one, which is empty, so I'm asking how to read the second one.
Only root-level outputs are accessible through remote state. This limitation is documented here. The documentation also suggests a solution: in the project that produces the state you're referring to, you will need to thread the module output through to a root output.
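As a rough sketch of what that threading looks like, assuming the producing configuration calls the module vpc and the module exposes an output named database_subnet_group (names taken from the state shown above, but not verified):
# In the project that writes global/network.tfstate (Terraform 0.11 syntax),
# re-export the module output at the root level:
output "database_subnet_group" {
  value = "${module.vpc.database_subnet_group}"
}
Once that project has been applied again and the new root output is written into the state file, data.terraform_remote_state.network.outputs.database_subnet_group should resolve in the consuming configuration shown in the question.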