How to create an eventarc trigger in terraform for GCS? - google-cloud-platform

I would like to create an eventarc trigger for GCS object creation. According to the Eventarc documentation, this should use the direct GCS trigger. I can create it like this, but I don't know where to put the bucket name:
resource "google_eventarc_trigger" "upload" {
name = "upload"
location = "europe-west1"
matching_criteria {
attribute = "type"
value = "google.cloud.storage.object.v1.finalized"
}
destination {
workflow = google_workflows_workflow.process_file.id
}
service_account = google_service_account.workflow.email
}
When I run this example, I get the following error:
Error: Error creating Trigger: googleapi: Error 400: The request was invalid: The request was invalid: missing required attribute "bucket" in trigger.event_filters

Reading the documentation didn't help, but after reading the Creating Eventarc triggers with Terraform blog post multiple times I found the answer: the bucket can be provided as another matching_criteria block, like this:
resource "google_eventarc_trigger" "upload" {
name = "upload"
location = "europe-west1"
matching_criteria {
attribute = "type"
value = "google.cloud.storage.object.v1.finalized"
}
matching_criteria {
attribute = "bucket"
value = google_storage_bucket.uploads.name
}
destination {
workflow = google_workflows_workflow.process_file.id
}
service_account = google_service_account.workflow.email
}
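One related detail: for direct GCS triggers, the Cloud Storage service agent needs the Pub/Sub Publisher role so it can deliver the object events. A minimal sketch of that grant, assuming the project is available as var.project_id (not part of the original post):
# Assumption: grant the Cloud Storage service agent the Pub/Sub Publisher role
# so it can publish object events for Eventarc.
data "google_storage_project_service_account" "gcs_account" {
  project = var.project_id
}

resource "google_project_iam_member" "gcs_pubsub_publisher" {
  project = var.project_id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
}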

Related

Triggering a google_cloudbuild_trigger from terraform to create a google_storage_bucket_object

I have the following setup:
A google_cloudbuild_trigger that runs on the latest github code and builds and uploads the build to a dataflow flex artifact location (on google storage)
A Dataflow flex template job that depends on the artifact being present.
I want to configure terraform so that if the artifact is not present, then automatically trigger the google_cloudbuild_trigger and wait for it to complete. If the artifact is present, then just continue using it.
Is this even possible in Terraform?
Snippets of my terraform script:
The following is the cloudbuild trigger:
resource "google_cloudbuild_trigger" "build_pipeline" {
name = "build_pipeline"
github {
owner = "my-org"
name = "my-project"
push {
branch = "^my-branch$"
}
}
filename = "path/cloudbuild.yaml"
substitutions = {
_PROJECT_ID = var.google_project_id
}
}
The following is the dataflow flex template job:
resource "google_dataflow_flex_template_job" "dataflow_job" {
provider = google-beta
name = "dataflow_job"
container_spec_gcs_path = "${google_storage_bucket.project_store.url}/path/to/flex/template.json"
project = var.google_project_id
depends_on = [google_bigquery_table.tables]
parameters = { ... }
}
I have tried creating a simple "data" resource like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
}
But I cannot figure out how to change this into something that triggers the google_cloudbuild_trigger.build_pipeline if the data resource doesn't exist.
Something like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
if_does_not_exist_trigger = google_cloudbuild_trigger.build_pipeline
}
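No such attribute exists, and the thread does not include an accepted answer. Purely as an illustrative sketch (the bucket path and branch are reused from the snippets above; the null_resource pattern and everything else are assumptions, not the asker's or an answerer's solution), one workaround is a local-exec provisioner that starts the build when the artifact is missing:
# Sketch only: start the Cloud Build trigger if the template object is missing.
# Note that "gcloud beta builds triggers run" returns immediately; it does not
# wait for the build to finish, so a separate wait/poll step would still be needed.
resource "null_resource" "ensure_template" {
  triggers = {
    always_run = timestamp() # re-evaluate on every apply
  }

  provisioner "local-exec" {
    command = "gsutil -q stat ${google_storage_bucket.project_store.url}/path/to/flex/template.json || gcloud beta builds triggers run build_pipeline --branch=my-branch --project=${var.google_project_id}"
  }
}

# The Dataflow job would then also list this in depends_on:
# depends_on = [google_bigquery_table.tables, null_resource.ensure_template]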

How to deploy API Gateway with template_file stored in s3 bucket?

Is it possible to set up template_file with a YAML file stored in an S3 bucket?
Is there any other way to attach an external file to API Gateway (as with a Lambda function, which can be built from a file stored on S3)?
Update:
I tried to combine the api_gateway resource with s3_bucket_object as a data source, but Terraform apparently does not see it; it reports that there are no changes.
data "aws_s3_bucket_object" "open_api" {
bucket = aws_s3_bucket.lambda_functions_bucket.bucket
key = "openapi-${var.current_api_version}.yaml"
}
resource "aws_api_gateway_rest_api" "default" {
name = "main-gateway"
body = data.aws_s3_bucket_object.open_api.body
endpoint_configuration {
types = ["REGIONAL"]
}
}
I also tried to achieve it using template_file:
data "aws_s3_bucket_object" "open_api" {
bucket = aws_s3_bucket.lambda_functions_bucket.bucket
key = "openapi-${var.current_api_version}.yaml"
}
data "template_file" "open_api" {
template = data.aws_s3_bucket_object.open_api.body
vars = {
lambda_invocation_arn_user_post = aws_lambda_function.user_post.invoke_arn
lambda_invocation_arn_webhook_post = aws_lambda_function.webhook_post.invoke_arn
}
}
resource "aws_api_gateway_rest_api" "default" {
name = "main-gateway"
body = data.template_file.open_api.rendered
endpoint_configuration {
types = ["REGIONAL"]
}
}
but the result is the same.
In the case of a REST API Gateway, you can try combining the body parameter of aws_api_gateway_rest_api with aws_s3_bucket_object as a data source.
In the case of an HTTP API Gateway, you can try combining the body parameter of aws_apigatewayv2_api with aws_s3_bucket_object as a data source.
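For the HTTP API Gateway case there is no snippet in the thread; a minimal sketch, reusing the bucket and key naming from the question (the v2 resource usage itself is an assumption), could look like:
data "aws_s3_bucket_object" "open_api_v2" {
  bucket = aws_s3_bucket.lambda_functions_bucket.bucket
  key    = "openapi-${var.current_api_version}.json"
}

resource "aws_apigatewayv2_api" "default" {
  name          = "main-http-gateway"
  protocol_type = "HTTP"
  body          = data.aws_s3_bucket_object.open_api_v2.body
}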
Edit:
From the Terraform docs for aws_s3_bucket_object: the content of an object (the body field) is available only for objects with a human-readable Content-Type (text/* and application/json). The appropriate Content-Type for YAML files is unclear, and using an application/* Content-Type for YAML results in Terraform ignoring the file's contents.
The issue was with the YAML file; it looks like Terraform does not support it here. It is necessary to use JSON format.
data "aws_s3_bucket_object" "open_api" {
bucket = aws_s3_bucket.lambda_functions_bucket.bucket
key = "openapi-${var.current_api_version}.json"
}
data "template_file" "open_api" {
template = data.aws_s3_bucket_object.open_api.body
vars = {
lambda_invocation_arn_user_post = aws_lambda_function.user_post.invoke_arn
lambda_invocation_arn_webhook_post = aws_lambda_function.webhook_post.invoke_arn
}
}
resource "aws_api_gateway_rest_api" "default" {
name = "main-gateway"
body = data.template_file.open_api.rendered
endpoint_configuration {
types = ["REGIONAL"]
}
}
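If the spec file is also uploaded through Terraform, the Content-Type restriction mentioned in the edit can be handled at upload time. A hedged sketch (the local source path is an assumption):
resource "aws_s3_bucket_object" "open_api_source" {
  bucket       = aws_s3_bucket.lambda_functions_bucket.bucket
  key          = "openapi-${var.current_api_version}.json"
  source       = "${path.module}/openapi-${var.current_api_version}.json" # assumed local path
  content_type = "application/json" # keeps the body readable by the data source
}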

Dependency between pubsub topic and subscription using terraform script

I am using one Terraform script to create a Pub/Sub topic and subscription. If the subscription needs to subscribe to the topic created by the same script, is there a way to create a dependency so that Terraform attempts to create the Pub/Sub subscription only after the topic is created?
My main file looks like this:
version = ""
project = var.project_id
region = var.region
zone = var.zone
}
# module "Dataflow" {
#source = "../modules/cloud-dataflow"
#}
module "PubSubTopic" {
source = "../modules/pubsub_topic"
}
#module "PubSubSubscription" {
# source = "../modules/pubsub_subscription"
#}
#module "CloudFunction" {
# source = "../modules/cloud-function"
#}
Terraform will attempt to create the resources in the proper order, but to answer your question: what you're looking for is the module dependency meta-argument depends_on.
For example, the subscription module should be created only once the topic resource has already been created, so you add depends_on to the subscription module.
Example:
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Official documentation: https://www.terraform.io/docs/language/meta-arguments/depends_on.html
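Applied to the modules from the question, a minimal sketch would be (module-level depends_on requires Terraform 0.13 or later):
module "PubSubTopic" {
  source = "../modules/pubsub_topic"
}

module "PubSubSubscription" {
  source     = "../modules/pubsub_subscription"
  depends_on = [module.PubSubTopic]
}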
You can create a simple Pub/Sub topic and a subscription with this snippet (just put the .json key of a service account with enough privileges on your filesystem):
provider "google" {
credentials = "${file("account.json")}" # Or use GOOGLE_APPLICATION_CREDENTIALS
project = "__your_project_id__"
region = "europe-west4" # Amsterdam
}
resource "google_pubsub_topic" "incoming_data" {
name = "incoming-data"
}
resource "google_pubsub_subscription" "incoming_subs" {
name = "Subscription_for_incoming_data"
topic = google_pubsub_topic.incoming_data.name
# Time since Pubsub receives a message to deletion.
expiration_policy {
ttl = "300000s"
}
# Time from client reception to ACK
message_retention_duration = "1200s"
retain_acked_messages = false
enable_message_ordering = false
}
To link a subscription with a topic in Terraform, you just need to reference the topic:
topic = google_pubsub_topic.TERRAFORM_TOPIC.name
Be careful with Google's requirements for topic and subscription identifiers. If they're not valid, terraform plan will pass, but you'll get an Error 400: You have passed an invalid argument to the service.

How to make gcp cloud function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the Cloud Functions IAM resources, which were modified in Nov 2019. Now you have to specify these resources as explained here. For your use case (a public Cloud Function) I'd recommend following this configuration and just changing the members attribute to "allUsers", so it would be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test and modify the functions you've already created here, in the "Try this API" panel, entering the proper resource and request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have this). I updated my key.json and it finally worked.
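As a hedged aside (not part of the original answer, and the service account email below is a placeholder), a narrower grant than project owner is usually enough to set IAM policy on functions; applied by an account that already has permission, it could look like:
# Assumption: this email stands in for the service account behind key.json.
# roles/cloudfunctions.admin includes cloudfunctions.functions.setIamPolicy.
resource "google_project_iam_member" "deployer_cf_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:terraform-deployer@my-project.iam.gserviceaccount.com"
}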

How to export all logs from stackdriver into big query through terraform

I'll preface this by saying I am very new to GCP, Stackdriver and BigQuery.
I'm attempting to have all logs within Stackdriver automatically exported to a BigQuery dataset through Terraform.
I have currently defined a logging sink and a BigQuery dataset, which the logging sink references. However, the dataset appears to be empty. Is there something I am missing here?
This is what my terraform code looks like currently:
resource "google_bigquery_dataset" "stackdriver_logging" {
dataset_id = "stackdriver_logs"
friendly_name = "stackdriver_logs"
location = "US"
project = google_project.project.project_id
}
resource "google_logging_project_sink" "big_query" {
name = "${google_project.project.project_id}-_big_query-sink"
project = google_project.project.project_id
destination = "bigquery.googleapis.com/projects/${google_project.project.project_id}/datasets/${google_bigquery_dataset.stackdriver_logging.dataset_id}"
unique_writer_identity = true
}
resource "google_project_iam_member" "bq_log_writer" {
member = google_logging_project_sink.big_query.writer_identity
role = "roles/bigquery.dataEditor"
project = google_project.project.project_id
}