https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/eventarc_trigger
I was looking to create a Terraform file to deploy an Eventarc trigger with a Cloud Function as its destination. The Terraform docs say this is not available to configure. Does this mean I can only deploy the Eventarc trigger with a generic Cloud Function and would need to configure the rest through the GUI? Or is there another solution that would let me deploy it fully through Terraform?
Or could I create it instead by deploying a Cloud Function through Terraform? If so, how would I code the event_trigger block for the Eventarc trigger? Below is my guess at what it would be:
resource "google_cloudfunctions_function" "cloudfunc-name" {
name = "cloudfunc-name"
description = "cloud func desc"
runtime = "python39"
project = "googleproject"
region = "us-central1"
available_memory_mb = 256
max_instances = 10
timeout = 300
entry_point = "helloworld_entry"
source_archive_bucket = "filepath_to_bucket"
source_archive_object = google_storage_bucket_object.function_zip_bucket_object.name
event_trigger {
event_type = "google.cloud.audit.log.v1.written"
event = "google.cloud.bigquery.v2.TableService.InsertTable"
receive_event = "us-central1"
service_account = "someserviceaccount#gserviceaccount.com"
failure_policy {
retry = false
}
}
Eventarc can only trigger Cloud Functions 2nd generation. Why? Because Cloud Functions 2nd gen is built on top of Cloud Run.
So you have to use the 2nd gen Terraform resource to deploy your function, and then use the Cloud Functions destination on the Eventarc trigger. I haven't tested this recently. In any case, if it doesn't work with the Cloud Functions config on the Eventarc trigger, you can replace it with the Cloud Run config (provide the name of the function, which is also the name of the underlying Cloud Run service).
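For reference, a minimal, untested sketch of what that could look like with the 2nd gen resource, reusing the audit-log filters from the guess above (the function name, bucket/object references and service account are placeholders):

resource "google_cloudfunctions2_function" "cloudfunc_name" {
  name     = "cloudfunc-name"
  location = "us-central1"
  project  = "googleproject"

  build_config {
    runtime     = "python39"
    entry_point = "helloworld_entry"

    source {
      storage_source {
        bucket = "filepath_to_bucket"
        object = google_storage_bucket_object.function_zip_bucket_object.name
      }
    }
  }

  service_config {
    available_memory   = "256M"
    max_instance_count = 10
    timeout_seconds    = 300
  }

  # With 2nd gen functions the Eventarc trigger is declared directly on the function.
  event_trigger {
    trigger_region        = "us-central1"
    event_type            = "google.cloud.audit.log.v1.written"
    retry_policy          = "RETRY_POLICY_DO_NOT_RETRY"
    service_account_email = "someserviceaccount@gserviceaccount.com"

    event_filters {
      attribute = "serviceName"
      value     = "bigquery.googleapis.com"
    }

    event_filters {
      attribute = "methodName"
      value     = "google.cloud.bigquery.v2.TableService.InsertTable"
    }
  }
}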
Can we automate GCP billing export into BQ through Terraform?
I tried the Terraform code below, but it's not working, so I'm not sure whether exporting GCP billing data into BigQuery is possible through Terraform at all.
resource "google_logging_billing_account_sink" "billing-sink" {
name = "billing-sink"
description = "Billing export"
billing_account = "**********"
unique_writer_identity = true
destination = "bigquery.googleapis.com/projects/${var.project_name}/datasets/${google_bigquery_dataset.billing_export.dataset_id}"
}
resource "google_project_iam_member" "log_writer" {
project = var.project_name
role = "roles/bigquery.dataEditor"
member = google_logging_billing_account_sink.billing-sink.writer_identity
}
Unfortunately, there is no such option. This concern has already been raised on GitHub and is tracked as an enhancement request; there is currently no ETA available. The only related pieces I can find in Terraform are google_logging_billing_account_sink and the guide Automating logs export to BigQuery with Terraform.
I know this is possible through the AWS CLI and the Console, as I have done it that way before, but I now need to do it in Terraform. I would like to execute the equivalent of the CLI command aws servicediscovery register-instance.
Any pointers to documentation or examples would be most appreciated.
This is now possible using the aws_service_discovery_instance resource as of version v3.57.0 of the AWS provider.
resource "aws_service_discovery_instance" "example" {
instance_id = "mydb"
service_id = aws_service_discovery_service.example.id
attributes = {
AWS_INSTANCE_CNAME = aws_db_instance.example.address
}
}
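The snippet above assumes that an aws_service_discovery_service.example and an aws_db_instance.example already exist elsewhere in the configuration. As a rough, illustrative sketch of the service side (the namespace name and VPC reference are placeholders):

resource "aws_service_discovery_private_dns_namespace" "example" {
  name = "example.internal"
  vpc  = aws_vpc.example.id # assumes an existing VPC resource
}

resource "aws_service_discovery_service" "example" {
  name = "mydb"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.example.id

    dns_records {
      ttl  = 10
      type = "CNAME" # matches the AWS_INSTANCE_CNAME attribute registered above
    }

    routing_policy = "WEIGHTED"
  }
}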
Adding instances to the discovery service is not yet supported:
Add an aws_service_discovery_instance resource
But a pull request has already been prepared for that, so hopefully soon:
resource/aws_service_discovery_instance: new implementation
I am planning to use Terraform to deploy to GCP, and I have read the instructions on how to set it up:
provider "google" {
project = "{{YOUR GCP PROJECT}}"
region = "us-central1"
zone = "us-central1-c"
}
It requires a project name in the provider configuration, but I am planning to create the project itself via Terraform, with code like the following:
resource "google_project" "my_project" {
name = "My Project"
project_id = "your-project-id"
org_id = "1234567"
}
How can I use Terraform without a pre-created project?
Take a look at this tutorial (from the Community):
Creating Google Cloud projects with Terraform
This tutorial assumes that you already have a Google Cloud account set up for your organization and that you are allowed to make organization-level changes in the account
The first step, for example, is to set up your environment variables with your Organization ID and your billing account ID, which will allow you to create projects using Terraform:
export TF_VAR_org_id=YOUR_ORG_ID
export TF_VAR_billing_account=YOUR_BILLING_ACCOUNT_ID
export TF_ADMIN=${USER}-terraform-admin
export TF_CREDS=~/.config/gcloud/${USER}-terraform-admin.json
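From there, a minimal sketch (the enabled service and variable names are illustrative) is to leave the project out of the provider block and have downstream resources reference the project that Terraform creates:

provider "google" {
  region = "us-central1"
  zone   = "us-central1-c"
}

resource "google_project" "my_project" {
  name            = "My Project"
  project_id      = "your-project-id"
  org_id          = var.org_id
  billing_account = var.billing_account
}

# Downstream resources point at the new project explicitly
# instead of relying on a provider-level default project.
resource "google_project_service" "compute" {
  project = google_project.my_project.project_id
  service = "compute.googleapis.com"
}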
Terraform now supports Cloud Run, as documented here, and I'm trying the example code below.
resource "google_cloud_run_service" "default" {
name = "tftest-cloudrun"
location = "us-central1"
provider = "google-beta"
metadata {
namespace = "my-project-name"
}
spec {
containers {
image = "gcr.io/cloudrun/hello"
}
}
}
Although it deploys the sample hello service with no errors, when I access the auto-generated URL it returns a 403 (Forbidden) response.
Is it possible to create a public Cloud Run API using Terraform?
(When I create the same service using the GUI, GCP provides an "Allow unauthenticated invocations" option under the "Authentication" section, but there seems to be no equivalent option in the Terraform documentation...)
Just add the following code to your Terraform script, which will make the service publicly accessible:
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = [
"allUsers",
]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
location = google_cloud_run_service.default.location
project = google_cloud_run_service.default.project
service = google_cloud_run_service.default.name
policy_data = data.google_iam_policy.noauth.policy_data
}
You can also find this here
Here the deployment is based only on the Knative serving spec. Cloud Run (fully managed) implements these specs but has its own internal behavior, such as role checks tied to IAM (not possible with Knative on a K8s cluster, where this is replaced by private/public services). The namespace on Cloud Run (fully managed) is the project ID, which is a workaround to identify the project, not a real K8s namespace.
So, the latest news I have from Google (I'm a Cloud Run alpha tester) is that they are working on integrating Cloud Run into Deployment Manager and Terraform. I don't have a deadline, sorry.
I'm setting up some Terraform to manage a Lambda function and an S3 bucket, with versioning on the bucket's contents. Creating the first version of the infrastructure is fine, but when releasing a second version, Terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the S3 bucket in the Terraform configuration and moving the api-version into a string variable.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "main.js"
output_path = "main.zip"
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket = "s3-bucket-for-tft-project"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
key = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
source = "${data.archive_file.lambda_zip.output_path}"
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_zip_file.key}"
function_name = "lambda_test_with_s3_version"
role = "${aws_iam_role.lambda_exec.arn}"
handler = "main.handler"
runtime = "nodejs8.10"
}
I would expect the output to be another zip file, with the Lambda now pointing at the new version and the ability to change back to the old version if var.api-version were changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
name = "lambda_artifact"
}
locals {
# Let's assume that this SSM parameter contains a JSON
# string describing which artifact to use, like this
# {
# "bucket": "s3-bucket-for-tft-project",
# "key": "v2.0.0/example.zip"
# }
lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact)
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = local.lambda_artifact.bucket
s3_key = local.lambda_artifact.key
function_name = "lambda_test_with_s3_version"
role = aws_iam_role.lambda_exec.arn
handler = "main.handler"
runtime = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter, and then trigger a Terraform run to deploy it.
To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
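As one illustration of that point, a hypothetical variant of the same pattern could read the version manifest from an S3 object instead of SSM Parameter Store (the bucket and key names here are assumptions):

# Reads a small JSON manifest such as {"bucket": "...", "key": "v2.0.0/example.zip"}
# that the build process uploads after each release.
data "aws_s3_bucket_object" "lambda_manifest" {
  bucket = "s3-bucket-for-tft-project"
  key    = "releases/current.json"
}

locals {
  lambda_artifact = jsondecode(data.aws_s3_bucket_object.lambda_manifest.body)
}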
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.