How to output attributes of resources created? - google-cloud-platform

I'm executing a GCP module to create a service account.
main.tf:
resource "google_service_account" "gsvc_account" {
account_id = "xxx"
display_name = ""
project = "proj-yyy"
}
output "account_id" {
value = "${google_service_account.gsvc_account.account_id}"
}
Once the account is created, a terraform.tfstate file is created containing all details of the account.
terraform.tfstate
{
  "version": 4,
  "terraform_version": "0.12.0",
  "serial": 3,
  "lineage": "aaaa-bbbb-cccc",
  "outputs": {
    "xxx": {
      "value": "xxx",
      "type": "string"
    }
  },
  "resources": [
    {
      "module": "module.gsvc_tf",
      "mode": "managed",
      "type": "google_service_account",
      "name": "gsvc_account",
      "provider": "provider.google",
      "instances": [
        {
          "schema_version": 0,
          "attributes": {
            "account_id": "xxx",
            "display_name": "",
            "email": "xxx@yyy.com",
            "id": "projects/proj-yyy/serviceAccounts/xxx@yyy.com",
            "name": "projects/proj-yyy/serviceAccounts/xxx@yyy.com",
            "policy_data": null,
            "project": "proj-yyy",
            "unique_id": "10891885"
          }
        }
      ]
    }
  ]
}
As you can see above, in the module I'm outputting the account_id input variable. Is there a way to output the computed attributes, such as id, name, etc., so that they can be accessed by another module? The attributes are only computed after the resource is created.

From the docs for the google_service_account resource:
the following computed attributes are exported:
email - The e-mail address of the service account. This value should be referenced from any google_iam_policy data sources that would grant the service account privileges.
name - The fully-qualified name of the service account.
unique_id - The unique id of the service account.
You can declare outputs using these attributes in the same way as you declared your account_id output. For example:
output "id" {
value = "${google_service_account.gsvc_account.unique_id}"
}
output "email" {
value = "${google_service_account.gsvc_account.email}"
}
Re this: "so that they can be accessed by another module" ... if the "other module" uses the same state file then the above outputs are addressable using ...
${google_service_account.gsvc_account.account_id}
${google_service_account.gsvc_account.email}
etc
... i.e. you don't need outputs at all. So, I'm guessing that the "other module" is in a separate project / workspace / repo and hence is using a different state file. If so, then you would access these outputs via remote state. For example, you would declare a remote state data source to point at whatever state contains your outputs:
resource "terraform_remote_state" "the_other_state" {
backend = "..."
config {
...
}
}
And then refer to the outputs within that state like so:
${data.terraform_remote_state.the_other_state.outputs.account_id}
${data.terraform_remote_state.the_other_state.outputs.email}
etc
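For example, a minimal sketch assuming the state containing those outputs lives in a GCS bucket (the bucket and prefix names below are placeholders, not values from the question):

data "terraform_remote_state" "the_other_state" {
  backend = "gcs"

  config = {
    bucket = "my-terraform-state" # hypothetical bucket holding the other configuration's state
    prefix = "gsvc-account"       # hypothetical state prefix
  }
}

# Any root-level outputs of that configuration are then available:
output "remote_account_email" {
  value = data.terraform_remote_state.the_other_state.outputs.email
}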

If your other module is run against a different state file (e.g. your Terraform code is in a separate directory) then you might be better off using the google_service_account data source instead of trying to output the values of the resource to your state file and using the terraform_remote_state data source to fetch them.
The documentation for the google_service_account data source shows a nice example of how you would use this:
data "google_service_account" "myaccount" {
account_id = "myaccount-id"
}
resource "google_service_account_key" "mykey" {
service_account_id = "${data.google_service_account.myaccount.name}"
}
resource "kubernetes_secret" "google-application-credentials" {
metadata = {
name = "google-application-credentials"
}
data {
credentials.json = "${base64decode(google_service_account_key.mykey.private_key)}"
}
}
This avoids needing to configure your remote state data source and can be significantly simpler. In fact, this is the way I'd recommend accessing information about an existing resource in any case where the provider has a suitable data source. I'd even go so far as to recommend the external data source over the terraform_remote_state data source if there's another way to get at that information (e.g. through a cloud provider's CLI), just because the terraform_remote_state data source is particularly clunky.
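As a rough sketch of the data source approach for this specific case (the account_id, project, and role below are placeholders, not values from the question):

data "google_service_account" "gsvc_account" {
  account_id = "xxx"      # placeholder: the account_id created by the other configuration
  project    = "proj-yyy" # placeholder project
}

# The computed attributes (email, name, unique_id) can then be used directly,
# e.g. to grant the looked-up account a role:
resource "google_project_iam_member" "gsvc_viewer" {
  project = "proj-yyy"
  role    = "roles/viewer"
  member  = "serviceAccount:${data.google_service_account.gsvc_account.email}"
}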

Related

List of Active Directory DNS servers IP addresses in an SSM document

I am converting my 0.11 code to 0.12. Most things seem to be working out well, but I am really lost on the SSM document.
In my 0.11 code, I had this code:
resource "aws_ssm_document" "ssm_document" {
name = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
document_type = "Command"
content = <<DOC
{
"schemaVersion": "1.0",
"description": "Automatic Domain Join Configuration",
"runtimeConfig": {
"aws:domainJoin": {
"properties": {
"directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
"directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
"dnsIpAddresses": [
"${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[0]}",
"${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[1]}"
]
}
}
}
}
DOC
depends_on = ["aws_directory_service_directory.microsoftad-lab"]
}
This worked reasonably well. However, Terraform 0.12 does not accept this code, saying
This value does not have any indices.
I have been trying to look up different solutions on the web, but I am encountering countless issues with datatypes. For example, one of the solutions I have seen proposes this:
"dnsIpAddresses": [
"${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[0]}",
"${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[1]}",
]
}
and I am getting
InvalidDocumentContent: JSON not well-formed
which is kinda weird to me, since when I look into the trace log, I seem to be getting relatively correct values:
{"Content":"{\n \"schemaVersion\": \"1.0\",\n \"description\": \"Automatic Domain Join Configuration\",\n \"runtimeConfig\": {\n \"aws:domainJoin\": {\n \"properties\": {\n \"directoryId\": \"d-9967245377\",\n \"directoryName\": \"012mig.lab\",\n \"dnsIpAddresses\": [\n \"10.0.0.227\",\n
\"10.0.7.103\",\n ]\n }\n }\n }\n}\n \n","DocumentFormat":"JSON","DocumentType":"Command","Name":"ssm_document_012mig.lab"}
I have tried concat and list to put the values together, but then I am getting the datatype errors. Right now, it looks like I am going around in loops here.
Does anyone have any direction to give me here?
Terraform 0.12 has stricter types than 0.11 and less automatic type coercion going on under the covers, so here you're running into the fact that the aws_directory_service_directory resource's dns_ip_addresses attribute isn't a list but a set:
"dns_ip_addresses": {
Type: schema.TypeSet,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
Computed: true,
},
Sets can't be indexed directly; they must first be explicitly converted to a list in 0.12.
As an example:
variable "example_list" {
type = list(string)
default = [
"foo",
"bar",
]
}
output "list_first_element" {
value = var.example_list[0]
}
Running terraform apply on this will output the following:
Outputs:
list_first_element = foo
However if we use a set variable instead:
variable "example_set" {
type = set(string)
default = [
"foo",
"bar",
]
}
output "set_first_element" {
value = var.example_set[0]
}
Then attempting to run terraform apply will throw the following error:
Error: Invalid index
on main.tf line 22, in output "set_foo":
22: value = var.example_set[0]
This value does not have any indices.
If we convert the set variable into a list with tolist first then it works:
variable "example_set" {
type = set(string)
default = [
"foo",
"bar",
]
}
output "set_first_element" {
value = tolist(var.example_set)[0]
}
Outputs:
set_first_element = bar
Note that sets may be ordered differently to what you might expect (in this case the result is ordered alphabetically rather than as declared). In your case this isn't an issue, but it's worth thinking about when indexing and expecting the elements to be in the order you declared them.
Another possible option here: instead of building the JSON array from individual set or list elements, you could directly encode the dns_ip_addresses attribute as JSON with the jsonencode function:
variable "example_set" {
type = set(string)
default = [
"foo",
"bar",
]
}
output "set_first_element" {
value = jsonencode(var.example_set)
}
Which outputs the following after running terraform apply:
Outputs:
set_first_element = ["bar","foo"]
So for your specific example we would want to do something like this:
resource "aws_ssm_document" "ssm_document" {
name = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
document_type = "Command"
content = <<DOC
{
"schemaVersion": "1.0",
"description": "Automatic Domain Join Configuration",
"runtimeConfig": {
"aws:domainJoin": {
"properties": {
"directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
"directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
"dnsIpAddresses": ${jsonencode(aws_directory_service_directory.microsoftad-lab.dns_ip_addresses)}
}
}
}
}
DOC
}
Note that I also removed the unnecessary depends_on. If a resource interpolates values from another resource, Terraform automatically understands that the referenced resource needs to be created before the one referencing it.
The resource dependencies documentation goes into this in more detail:
Most resource dependencies are handled automatically. Terraform analyses any expressions within a resource block to find references to other objects, and treats those references as implicit ordering requirements when creating, updating, or destroying resources. Since most resources with behavioral dependencies on other resources also refer to those resources' data, it's usually not necessary to manually specify dependencies between resources.
However, some dependencies cannot be recognized implicitly in configuration. For example, if Terraform must manage access control policies and take actions that require those policies to be present, there is a hidden dependency between the access policy and a resource whose creation depends on it. In these rare cases, the depends_on meta-argument can explicitly specify a dependency.
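To make that concrete for this case: because the content above already references aws_directory_service_directory.microsoftad-lab attributes, Terraform infers the ordering on its own. An explicit depends_on would only be warranted if the SSM document did not reference the directory anywhere, roughly like this (a hypothetical sketch, not taken from the question):

resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"
  content       = "..." # some content that does not mention the directory

  # Needed only because nothing above references the directory resource.
  depends_on = [aws_directory_service_directory.microsoftad-lab]
}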

CDK adds random parameters

So I have this function I'm trying to declare and it works and deploys just dandy unless you uncomment the logRetention setting. If logRetention is specified the cdk deploy operation
adds additional parameters to the stack. And, of course, this behavior is completely unexplained in the documentation.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#log-group
SingletonFunction.Builder.create(this, "native-lambda-s3-fun")
        .functionName(funcName)
        .description("")
        // .logRetention(RetentionDays.ONE_DAY)
        .handler("app")
        .timeout(Duration.seconds(300))
        .runtime(Runtime.GO_1_X)
        .uuid(UUID.randomUUID().toString())
        .environment(new HashMap<String, String>() {{
            put("FILE_KEY", "/file/key");
            put("S3_BUCKET", junk.getBucketName());
        }})
        .code(Code.fromBucket(uploads, functionUploadKey(
                "formation-examples",
                "native-lambda-s3",
                lambdaVersion.getValueAsString()
        )))
        .build();
"Parameters": {
"lambdaVersion": {
"Type": "String"
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3BucketB030C8A8": {
"Type": "String",
"Description": "S3 bucket for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3VersionKey6A2AABD7": {
"Type": "String",
"Description": "S3 key for asset version \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
},
"AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aArtifactHashEDC522F0": {
"Type": "String",
"Description": "Artifact hash for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
}
},
It's a bug. They're Working On It™. So, rejoice - we can probably expect a fix sometime within the next decade.
I haven't tried it yet, but I'm guessing the workaround is to manipulate the low-level CfnLogGroup construct, since it has the authoritative retentionInDays property. The relevant high-level Log Group construct can probably be obtained from the Function via its logGroup property. Failing that, the LogGroup can be created from scratch (which will probably be a headache all on its own).
I also encountered the problem described above. From what I can tell, we are unable to specify a log group name and thus the log group name is predictable.
My solution was to simply create a LogGroup with the same name as my Lambda function with the /aws/lambda/ prefix.
Example:
var function = new Function(
    this,
    "Thing",
    new FunctionProps
    {
        FunctionName = $"{Stack.Of(this).StackName}-Thing",
        // ...
    });

_ = new LogGroup(
    this,
    "ThingLogGroup",
    new LogGroupProps
    {
        LogGroupName = $"/aws/lambda/{function.FunctionName}",
        Retention = RetentionDays.ONE_MONTH,
    });
This does not create unnecessary "AssetParameters..." CF template parameters like the inline option does.
Note: I'm using CDK version 1.111.0 and 1.86.0 with C#

Terraform load json object from AWS S3

I need to load data from a non-public S3 bucket. Using this JSON, I want to be able to loop over the lists it contains within Terraform.
Example:
{
  "info": [
    "10.0.0.0/24",
    "10.1.1.0/24",
    "10.2.2.0/24"
  ]
}
I can retrieve the JSON fine using the following:
data "aws_s3_bucket_object" "config" {
bucket = "our-bucket"
key = "global.json"
}
What I cannot do is use this data as a map or list within Terraform. Any ideas?
After a good deal of trial and error I figured out a solution. Note that for this to work, the JSON source appears to need to be flat: no nested objects like lists or maps.
{
  "foo1": "my foo1",
  "foo2": "my foo2",
  "foo3": "my foo3"
}
data "aws_s3_bucket_object" "config-json" {
bucket = "my-bucket"
key = "foo.json"
}
data "external" "config-map" {
program = ["echo", "${data.aws_s3_bucket_object.config-json.body}"]
}
output "foo" {
value = ["${values(data.external.config-map.result)}"]
}

Google Data Transfer API says completed but nothing has happened?

I'm using the Data Transfer API to programmatically transfer the files owned by user A to user B as part of our exit process.
I look up the email addresses for the two users so that I can retrieve their IDs. I also query the list of data transfer applications to get the application ID for "Drive and Docs".
I pass the built transfer definition to the API and get the following JSON back:
{
  "kind": "admin#datatransfer#DataTransfer",
  "etag": "\"RV_wOygBiIUZUtakV6Iq44-H_Gw/2M4Z2X_c8OpsyQOJxtWDmIHcYzo\"",
  "id": "AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK",
  "oldOwnerUserId": "101496053770427062754",
  "newOwnerUserId": "118268322014081744703",
  "applicationDataTransfers": [
    {
      "applicationId": "55656082996",
      "applicationTransferStatus": "pending"
    }
  ],
  "overallTransferStatusCode": "inProgress",
  "requestTime": "2017-03-31T10:50:48.560Z"
}
I then query the transfers API to get an update on that transfer and get the following back:
{
  'kind': 'admin#datatransfer#DataTransfer',
  'requestTime': '2017-03-31T10:50:48.560Z',
  'applicationDataTransfers': [
    {
      'applicationTransferStatus': 'completed',
      'applicationId': '55656082996'
    }
  ],
  'newOwnerUserId': '118268322014081744703',
  'oldOwnerUserId': '101496053770427062754',
  'etag': '"RV_wOygBiIUZUtakV6Iq44-H_Gw/ZVnLgj3YLcsURTSzNm8m91tNeC0"',
  'overallTransferStatusCode': 'completed',
  'id': 'AKrEtIbF0aAg_4KK7-lHFOpRNPhcgAOWWDEK1HE0zD_EEY-bOPHXuj1rKNrEE-yHPYyjY8vzvZkK'
}
and, indeed, I get a confirmation email that the files have been transferred.
However, if I look in Google Drive for both users, the files have NOT changed ownership. For user B, a new directory has been created with the email address of user A, but it contains no files and user A still owns all of their files.
What have I done wrong or misunderstood?
Thanks.
I faced the same issue; you need to provide "applicationTransferParams" with a key and value:
"applicationTransferParams": [
{
"key": string,
"value": [
string
]
}
]

Model directory expected to contain the 'export.meta' file

While creating a new version of a model, after selecting a bucket and folder, I got this error from the Cloud Console:
{
  "error": {
    "code": 400,
    "message": "Field: version.deployment_uri Error: The model directory gs://ml-codelab/v1-output/ is expected to contain the 'export.meta' file. Please make sure it exists and Cloud ML service account cloud-ml-service@xxx.iam.gserviceaccount.com has read access to it",
    "status": "FAILED_PRECONDITION",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "version.deployment_uri",
            "description": "The model directory gs://ml-codelab/v1-output/ is expected to contain the 'export.meta' file. Please make sure it exists and Cloud ML service account cloud-ml-service@xxxx.iam.gserviceaccount.com has read access to it"
          }
        ]
      }
    ]
  }
}
You need to create a meta graph when you export your model. You can do this using a saver, e.g.:
saver = tf.train.Saver()
saver.save(sess, os.path.join(FLAGS.output_dir, "export"))
Typically you save the session and graph separately because your serving graph can be different from the training graph.