Dependency between pubsub topic and subscription using terraform script - google-cloud-platform

I am using one Terraform script to create a Pub/Sub topic and a subscription. If the subscription needs to subscribe to the topic created by the same script, is there a way to create a dependency such that Terraform attempts to create the Pub/Sub subscription only after the topic is created?
My main file looks like this:
provider "google" {
  version = ""
  project = var.project_id
  region  = var.region
  zone    = var.zone
}
# module "Dataflow" {
#source = "../modules/cloud-dataflow"
#}
module "PubSubTopic" {
source = "../modules/pubsub_topic"
}
#module "PubSubSubscription" {
# source = "../modules/pubsub_subscription"
#}
#module "CloudFunction" {
# source = "../modules/cloud-function"
#}

Terraform will generally work out the proper creation order on its own, but what you're looking for is the module meta-argument "depends_on".
For example, to make sure the subscription module is created only once the topic resource already exists, add depends_on to the subscription module.
Example:
resource "aws_iam_policy_attachment" "example" {
name = "example"
roles = [aws_iam_role.example.name]
policy_arn = aws_iam_policy.example.arn
}
module "uses-role" {
# ...
depends_on = [aws_iam_policy_attachment.example]
}
Official documentation: https://www.terraform.io/docs/language/meta-arguments/depends_on.html
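Applied to the modules in the question (module names and source paths taken from the question above; note that depends_on on a module block requires Terraform 0.13 or later), a minimal sketch could look like this:
module "PubSubTopic" {
  source = "../modules/pubsub_topic"
}

module "PubSubSubscription" {
  source     = "../modules/pubsub_subscription"
  depends_on = [module.PubSubTopic]
}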

You can create a simple Pub/Sub topic and subscription with this snippet (just place the .json key for a service account with enough privileges on your filesystem):
provider "google" {
credentials = "${file("account.json")}" # Or use GOOGLE_APPLICATION_CREDENTIALS
project = "__your_project_id__"
region = "europe-west4" # Amsterdam
}
resource "google_pubsub_topic" "incoming_data" {
name = "incoming-data"
}
resource "google_pubsub_subscription" "incoming_subs" {
name = "Subscription_for_incoming_data"
topic = google_pubsub_topic.incoming_data.name
# Time since Pubsub receives a message to deletion.
expiration_policy {
ttl = "300000s"
}
# Time from client reception to ACK
message_retention_duration = "1200s"
retain_acked_messages = false
enable_message_ordering = false
}
To link a subscription with a topic in Terraform, you just need to reference the topic from the subscription:
topic = google_pubsub_topic.TERRAFORM_TOPIC.name
Be careful with Google's requirements for topic and subscription identifiers. If they're not valid, terraform plan will pass, but you'll get an "Error 400: You have passed an invalid argument to the service".
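If the topic and subscription live in separate modules, as in the question, you can create the same dependency implicitly by passing the topic name out of the topic module and into the subscription module. A minimal sketch, assuming hypothetical output and variable names (topic_name) and the module layout from the question:
# modules/pubsub_topic/outputs.tf
output "topic_name" {
  value = google_pubsub_topic.incoming_data.name
}

# modules/pubsub_subscription/variables.tf
variable "topic_name" {
  type = string
}

# modules/pubsub_subscription/main.tf
resource "google_pubsub_subscription" "incoming_subs" {
  name  = "incoming-data-subscription"
  topic = var.topic_name
}

# main.tf
module "PubSubSubscription" {
  source     = "../modules/pubsub_subscription"
  topic_name = module.PubSubTopic.topic_name
}
Because the subscription module's input references the topic module's output, Terraform orders the creation correctly without an explicit depends_on.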

Related

Triggering a google_cloudbuild_trigger from terraform to create a google_storage_bucket_object

I have the following setup:
A google_cloudbuild_trigger that runs on the latest GitHub code, then builds and uploads the result to a Dataflow Flex Template artifact location (on Google Storage)
A Dataflow Flex Template job that depends on the artifact being present.
I want to configure Terraform so that if the artifact is not present, it automatically runs the google_cloudbuild_trigger and waits for it to complete. If the artifact is present, it just continues using it.
Is this even possible in Terraform?
Snippets of my terraform script:
The following is the cloudbuild trigger:
resource "google_cloudbuild_trigger" "build_pipeline" {
name = "build_pipeline"
github {
owner = "my-org"
name = "my-project"
push {
branch = "^my-branch$"
}
}
filename = "path/cloudbuild.yaml"
substitutions = {
_PROJECT_ID = var.google_project_id
}
}
The following is the dataflow flex template job:
resource "google_dataflow_flex_template_job" "dataflow_job" {
provider = google-beta
name = "dataflow_job"
container_spec_gcs_path = "${google_storage_bucket.project_store.url}/path/to/flex/template.json"
project = var.google_project_id
depends_on = [google_bigquery_table.tables]
parameters = { ... }
}
I have tried creating a simple "data" resource like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
}
But I cannot figure out how to change this into something that triggers the google_cloudbuild_trigger.build_pipeline if the data resource doesn't exist.
Something like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
if_does_not_exist_trigger = google_cloudbuild_trigger.build_pipeline
}

Issue while deploying a GCP cloud function

I have the following issue while deploying a cloud function (I am completely new to GCP and Terraform).
I am trying to deploy a cloud function through Terraform, but when I deploy it, it destroys an existing cloud function that was already deployed in GCP (deployed by another colleague), even though the cloud function name, bucket object name and archive file name are different (only the bucket name and project ID are the same).
It looks like it is picking up the state of the existing cloud function that is already deployed.
Is there any way to keep the existing state unaffected?
Code snippet (as mentioned above, there is already one cloud function deployed with the same project ID and bucket):
main.tf:
provider "google" {
project = "peoject_id"
credentials = "cdenetialfile"
region = "some-region"
}
locals {
timestamp = formatdate("YYMMDDhhmmss", timestamp())
root_dir = abspath("./app/")
}
data "archive_file" "archive" {
type = "zip"
output_path = "/tmp/function-${local.timestamp}.zip"
source_dir = local.root_dir
}
resource "google_storage_bucket_object" "object_archive" {
name = "archive-${local.timestamp}.zip"
bucket = "dev-bucket-tfstate"
source = "/tmp/function-${local.timestamp}.zip"
depends_on = [data.archive_file.archive]
}
resource "google_cloudfunctions_function" "translator_function" {
name = "Cloud_functionname"
available_memory_mb = 256
timeout = 61
runtime = "java11"
source_archive_bucket = "dev-bucket-tfstate"
source_archive_object = google_storage_bucket_object.object_archive.name
entry_point = "com.test.controller.myController"
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = "topic_name"
}
}
backend.tf
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    credentials = "cdenetialfile"
  }
}

How to make gcp cloud function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
  name          = "test-bucket-lh05111992"
  location      = "us-central1"
  force_destroy = true
}

# zip up function source code
data "archive_file" "my_function_script_zip" {
  type        = "zip"
  source_dir  = "../source/scripts/my-function-script"
  output_path = "../source/scripts/my-function-script.zip"
}

# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
  name   = "index.zip"
  bucket = google_storage_bucket.source_code.name
  source = "../source/scripts/my-function-script.zip"
}
# create the cloud function
resource "google_cloudfunctions_function" "function" {
  name                  = "send_my_function_script"
  description           = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
  runtime               = "nodejs10"
  available_memory_mb   = 128
  source_archive_bucket = google_storage_bucket.source_code.name
  source_archive_object = google_storage_bucket_object.my_function_script_zip.name
  trigger_http          = true
  entry_point           = "handleRequest"
}

# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
  project        = google_cloudfunctions_function.function.project
  region         = "us-central1"
  cloud_function = google_cloudfunctions_function.function.name
  role           = "roles/cloudfunctions.invoker"
  member         = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources" part, which has been modified since Nov 2019. Now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend you follow this configuration and just change the "members" attribute to "allUsers", so it'd be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test and modify the functions you've already created here in the "Try this API" right panel; enter the proper resource and a request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM resources as #chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have this). I updated my key.json and it finally worked.
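For reference, a narrower alternative to granting Project Owner is to give the deploying service account a role that includes the cloudfunctions.functions.setIamPolicy permission, such as roles/cloudfunctions.admin. A minimal sketch, assuming a hypothetical deployer service account email and that it is applied with credentials allowed to change project IAM:
resource "google_project_iam_member" "deployer_cloudfunctions_admin" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  # Hypothetical service account used by Terraform to deploy the function
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com"
}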

How to export all logs from stackdriver into big query through terraform

I'll preface this by saying I am very new to GCP, Stackdriver and BigQuery.
I'm attempting to have all logs within Stackdriver automatically export to a BigQuery dataset through Terraform.
I have currently defined a BigQuery dataset and a logging sink that references it. However, the dataset appears to be empty. Is there something I am missing here?
This is what my terraform code looks like currently:
resource "google_bigquery_dataset" "stackdriver_logging" {
dataset_id = "stackdriver_logs"
friendly_name = "stackdriver_logs"
location = "US"
project = google_project.project.project_id
}
resource "google_logging_project_sink" "big_query" {
name = "${google_project.project.project_id}-_big_query-sink"
project = google_project.project.project_id
destination = "bigquery.googleapis.com/projects/${google_project.project.project_id}/datasets/${google_bigquery_dataset.stackdriver_logging.dataset_id}"
unique_writer_identity = true
}
resource "google_project_iam_member" "bq_log_writer" {
member = google_logging_project_sink.big_query.writer_identity
role = "roles/bigquery.dataEditor"
project = google_project.project.project_id
}

Terraform AWS Cognito App Client

Currently stuck in the mud with trying to set up an 'app client' for an AWS Cognito User Pool through Terraform. Here is my resource as it stands:
resource "aws_cognito_user_pool" "notes-pool" {
name = "notes-pool"
username_attributes = ["email"]
verification_message_template {
default_email_option = "CONFIRM_WITH_CODE"
}
password_policy {
minimum_length = 10
require_lowercase = false
require_numbers = true
require_symbols = false
require_uppercase = true
}
tags {
"Name" = "notes-pool"
"Environment" = "production"
}
}
The above works just fine, and my user pool is created. If anybody has any ideas on how to create an app client in the same resource, I'm all ears. I'm beginning to suspect that this functionality doesn't exist!
I believe this was just added in the most recent version of Terraform. You could do something like the following to add a client to your user pool:
resource "aws_cognito_user_pool_client" "client" {
name = "client"
user_pool_id = "${aws_cognito_user_pool.pool.id}"
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
See here for the docs: Terraform entry on aws_cognito_user_pool_client
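Adapted to the pool in the question, whose resource name is notes-pool rather than pool, a minimal sketch (the client resource name and client name below are arbitrary) would be:
resource "aws_cognito_user_pool_client" "notes_pool_client" {
  name                = "notes-pool-client"
  user_pool_id        = aws_cognito_user_pool.notes-pool.id
  generate_secret     = true
  explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}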
UPDATE - this is now supported by terraform. See #cyram's answer.
This feature is not currently supported by Terraform.
There is an open issue on GitHub where this has been requested (give it a thumbs up if you would benefit from this feature).
Until support is added, the best option is to use the local-exec provisioner to create the user pool via the CLI once the resource is created:
resource "aws_cognito_user_pool" "notes-pool" {
name = "notes-pool"
username_attributes = ["email"]
...
provisioner "local-exec" {
command = <<EOF
aws cognito-idp create-user-pool-client \
--user-pool-id ${aws_cognito_user_pool.notes-pool.id} \
--client-name client-name \
--no-generate-secret \
--explicit-auth-flows ADMIN_NO_SRP_AUTH
EOF
}
}
Please note that in order to use this you must have the AWS CLI installed and authenticated (I use environment variables to authenticate with both Terraform and the AWS CLI).
Once the user pool is created, you can use the CreateUserPoolClient API to create an app client within the user pool. Please refer to the API documentation: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_CreateUserPoolClient.html