Issue while deploying a GCP Cloud Function - google-cloud-platform

I have the following issue while deploying a Cloud Function (I am completely new to GCP and Terraform).
I am trying to deploy a Cloud Function through Terraform, but when I deploy it, it destroys an existing Cloud Function that was already deployed in GCP (deployed by another colleague), even though the Cloud Function name, bucket object name and archive file name are different (only the bucket name and project ID are the same).
It looks like it is picking up the state of the existing Cloud Function that is already deployed.
Is there any way to keep the existing state unaffected?
Code snippet (as mentioned above, there is already one Cloud Function deployed with the same project ID and bucket):
main.tf:
provider "google" {
project = "peoject_id"
credentials = "cdenetialfile"
region = "some-region"
}
locals {
  timestamp = formatdate("YYMMDDhhmmss", timestamp())
  root_dir  = abspath("./app/")
}

data "archive_file" "archive" {
  type        = "zip"
  output_path = "/tmp/function-${local.timestamp}.zip"
  source_dir  = local.root_dir
}

resource "google_storage_bucket_object" "object_archive" {
  name       = "archive-${local.timestamp}.zip"
  bucket     = "dev-bucket-tfstate"
  source     = "/tmp/function-${local.timestamp}.zip"
  depends_on = [data.archive_file.archive]
}

resource "google_cloudfunctions_function" "translator_function" {
  name                  = "Cloud_functionname"
  available_memory_mb   = 256
  timeout               = 61
  runtime               = "java11"
  source_archive_bucket = "dev-bucket-tfstate"
  source_archive_object = google_storage_bucket_object.object_archive.name
  entry_point           = "com.test.controller.myController"

  event_trigger {
    event_type = "google.pubsub.topic.publish"
    resource   = "topic_name"
  }
}
backend.tf:
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    credentials = "credentialfile"
  }
}
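A likely cause is that both configurations share the same state: the "gcs" backend without a prefix stores the state at the bucket's default path, so if the colleague's configuration also points at dev-bucket-tfstate without a prefix, both configurations read and write the same state file, and each plan tries to remove the other's resources. A minimal sketch of keeping the states separate (the prefix value below is purely illustrative) would be to give each configuration its own prefix in backend.tf:
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    # Illustrative prefix; each configuration should use its own
    prefix      = "terraform/state/translator-function"
    credentials = "credentialfile"
  }
}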

Related

Triggering a google_cloudbuild_trigger from terraform to create a google_storage_bucket_object

I have the following setup:
A google_cloudbuild_trigger that runs on the latest GitHub code, builds it and uploads the build to a Dataflow Flex artifact location (on Google Storage)
A Dataflow Flex Template job that depends on the artifact being present.
I want to configure Terraform so that if the artifact is not present, it automatically runs the google_cloudbuild_trigger and waits for it to complete. If the artifact is present, it just continues using it.
Is this even possible in Terraform?
Snippets of my terraform script:
The following is the cloudbuild trigger:
resource "google_cloudbuild_trigger" "build_pipeline" {
name = "build_pipeline"
github {
owner = "my-org"
name = "my-project"
push {
branch = "^my-branch$"
}
}
filename = "path/cloudbuild.yaml"
substitutions = {
_PROJECT_ID = var.google_project_id
}
}
The following is the dataflow flex template job:
resource "google_dataflow_flex_template_job" "dataflow_job" {
provider = google-beta
name = "dataflow_job"
container_spec_gcs_path = "${google_storage_bucket.project_store.url}/path/to/flex/template.json"
project = var.google_project_id
depends_on = [google_bigquery_table.tables]
parameters = { ... }
}
I have tried creating a simple "data" resource like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
}
But I cannot figure out how to change this into something that triggers the google_cloudbuild_trigger.build_pipeline if the data resource doesn't exist.
Something like:
data "google_storage_bucket_object" "picture" {
name = "path/to/flex/template.json"
bucket = google_storage_bucket.project_store.name
if_does_not_exist_trigger = google_cloudbuild_trigger.build_pipeline
}
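Terraform does not provide an attribute like the if_does_not_exist_trigger pseudo-code above, so any conditional triggering ends up being imperative. Purely as a sketch (the trigger name and branch are taken from the config above, gcloud is assumed to be available where Terraform runs, and waiting for the build to actually finish would still need extra scripting), one workaround is to run the trigger from a null_resource and have the Dataflow job depend on it:
resource "null_resource" "run_build_pipeline" {
  # Illustrative: re-run whenever the expected template path changes
  triggers = {
    template_path = "path/to/flex/template.json"
  }

  provisioner "local-exec" {
    # Kicks off the Cloud Build trigger; does not block until the build completes
    command = "gcloud beta builds triggers run build_pipeline --branch=my-branch --project=${var.google_project_id}"
  }
}
The google_dataflow_flex_template_job would then add null_resource.run_build_pipeline to its depends_on list.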

Terraform GCP executes resources in wrong order

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
When executed the first time it worked fine, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket:
Of course it can't create an object inside something that doesn't exist. Why is that?
You have to create a dependency between your objects and your bucket (see code below). Otherwise, Terraform won't know that it has to create the bucket first, and then the objects. This is related to how Terraform stores resources in a directed graph.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket first, then objects. This is equivalent to using depends_on in your google_storage_bucket_object resources, but in this particular case I recommend using a reference to your bucket in your objects rather than an explicit depends_on.
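For reference, the explicit depends_on variant mentioned above would look roughly like this (a sketch of the same configuration, keeping the bucket name as a literal):
resource "google_storage_bucket_object" "file-main-py" {
  name   = "main.py"
  source = "app-files/main.py"
  bucket = "python_bucket_exam"

  # Explicit ordering: create the bucket before this object
  depends_on = [google_storage_bucket.bucket_for_python_application]
}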

Creating endpoint in cloud run with Terraform and Google Cloud Platform

I'm researching a way to use Terraform with the GCP provider to create a Cloud Run endpoint. For starters I'm creating test data, a simple hello world. I have the Cloud Run service resource configured, and a Cloud Endpoints resource configured with depends_on the Cloud Run service. However, I'm trying to pass the Cloud Run URL as the service name to Cloud Endpoints. The files are structured following best practice, with a module containing the Cloud Run and Cloud Endpoints resources. However, the Terraform interpolation for passing the output of
service_name = "${google_cloud_run_service.default.status[0].url}"
throws an Error: Invalid character. I've also tried module.folder.output.url.
I have the openapi_config.yml hardcoded within the TF config.
I'm wondering if it's possible to get this to work. I've researched many posts and some forums are outdated.
#Cloud Run
resource "google_cloud_run_service" "default" {
  name     = var.name
  location = var.location

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
    metadata {
      annotations = {
        "autoscaling.knative.dev/maxScale" = "1000"
        "run.googleapis.com/cloudstorage"  = "project_name:us-central1:${google_storage_bucket.storage-run.name}"
        "run.googleapis.com/client-name"   = "terraform"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  autogenerate_revision_name = true
}

output "url" {
  value = "${google_cloud_run_service.default.status[0].url}"
}

data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  location    = google_cloud_run_service.default.location
  project     = google_cloud_run_service.default.project
  service     = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.noauth.policy_data
}

#CLOUD STORAGE
resource "google_storage_bucket" "storage-run" {
  name               = var.name
  location           = var.location
  force_destroy      = true
  bucket_policy_only = true
}

data "template_file" "openapi_spec" {
  template = file("${path.module}/openapi_spec.yml")
}

#CLOUD ENDPOINT SERVICE
resource "google_endpoints_service" "api-service" {
  service_name   = "api_name.endpoints.project_name.cloud.goog"
  project        = var.project
  openapi_config = data.template_file.openapi_spec.rendered
}
ERROR: googleapi: Error 400: Service name 'CLOUD_RUN_ESP_NAME' provided in the config files doesn't match the service name 'api_name.endpoints.project_name.cloud.goog' provided in the request., badRequest
So I later discovered that the service name must match the host of the Cloud Run ESP service URL (the URL without https://) in order for the Cloud Endpoints service to provision. The Terraform docs state otherwise, in the form "$apiname.endpoints.$projectid.cloud.goog" (terraform_cloud_endpoints), while the GCP docs state that the Cloud Run ESP service name must be the URL without https://, e.g. gateway-12345-uc.a.run.app
Getting Started with Endpoints for Cloud Run
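Building on that discovery, a minimal sketch of wiring the two together (assuming the OpenAPI spec's host field is templated to the same value, so the names match on both sides) would be to strip the scheme from the Cloud Run URL with trimprefix:
resource "google_endpoints_service" "api-service" {
  # Matches the Cloud Run host, e.g. gateway-12345-uc.a.run.app
  service_name   = trimprefix(google_cloud_run_service.default.status[0].url, "https://")
  project        = var.project
  openapi_config = data.template_file.openapi_spec.rendered
}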

How to make gcp cloud function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources", which were modified in Nov 2019. Now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend following this configuration and just changing the "members" attribute to "allUsers", so it'd be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test by modifying the functions you've already created here, in the "Try this API" panel on the right, and entering the proper resource and request body like this (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as #chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have this). I updated my key.json and it finally worked.
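If granting the deploying account full project owner feels too broad, a narrower sketch (the service account email below is hypothetical) would be to grant it the Cloud Functions Admin role, which includes the cloudfunctions.functions.setIamPolicy permission from the original error:
resource "google_project_iam_member" "function_deployer" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin"
  # Hypothetical: the service account behind key.json
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com"
}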

AWS Beanstalk Tomcat and Terraform

I'm trying to set up Tomcat using Beanstalk.
Here's my Terraform code:
(bucket is created beforehand)
# Upload the JAR to bucket
resource "aws_s3_bucket_object" "myjar" {
  bucket = "${aws_s3_bucket.mybucket.id}"
  key    = "src/java-tomcat-v3.zip"
  source = "${path.module}/src/java-tomcat-v3.zip"
  etag   = "${md5(file("${path.module}/src/java-tomcat-v3.zip"))}"
}

# Define app
resource "aws_elastic_beanstalk_application" "tftestapp" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

# Define beanstalk jar version
resource "aws_elastic_beanstalk_application_version" "myjarversion" {
  name         = "tf-test-version-label"
  application  = "tf-test-name"
  description  = "My description"
  bucket       = "${aws_s3_bucket.mybucket.id}"
  key          = "${aws_s3_bucket_object.myjar.id}"
  force_delete = true
}

# Deploy env
resource "aws_elastic_beanstalk_environment" "tftestenv" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftestapp.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v3.0.0 running Tomcat 7 Java 7"

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }
  ...
}
And I end up with a very strange error, saying it can't find the file on the bucket.
InvalidParameterCombination: Unable to download from S3 location
(Bucket: mybucket Key: src/java-tomcat-v3.zip). Reason: Not Found
Nevertheless, connecting to the web console and accessing my bucket, I can see the zip file is right there...
I don't get it, any help please?
PS: I tried with and without the src/
Cheers
I was recently having this same error on Terraform 0.13.
Differences between 0.13 and older versions:
The documentation appears to be out of date. For instance, under aws_elastic_beanstalk_application_version it shows
resource "aws_s3_bucket" "default" {
bucket = "tftest.applicationversion.bucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.default.id
key = "beanstalk/go-v1.zip"
source = "go-v1.zip"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.default.id
key = aws_s3_bucket_object.default.id
}
If you attempt to use this, terraform fails with the bucket object because the "source" argument is no longer available within aws_elastic_beanstalk_application_version.
After removing the "source" property, it moved to the next issue, which was Error: InvalidParameterCombination: Unable to download from S3 location (Bucket: mybucket Key: mybucket/myfile.txt). Reason: Not Found
This error comes from the following Terraform:
resource "aws_s3_bucket" "bucket" {
bucket = "mybucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.bucket.id
key = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.bucket.id
key = aws_s3_bucket_object.default.id
}
What Terraform ends up doing here is prepending the bucket to the key. When you run terraform plan you see that bucket = "mybucket" and key = "mybucket/myfile.txt". The problem with this is that Terraform looks in the bucket for the file "mybucket/myfile.txt" when it should ONLY be looking for "myfile.txt".
Solution
What I did was REMOVE the bucket and bucket object resources from the script and place the names in variables, as follows:
variable "sourceCodeS3BucketName" {
type = string
description = "The bucket that contains the engine code."
default = "mybucket"
}
variable "sourceCodeFilename" {
type = string
description = "The code file name."
default = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "myApp" {
name = "my-beanstalk-app"
description = "My application"
}
resource "aws_elastic_beanstalk_application_version" "v1_0_0" {
name = "my-application-v1_0_0"
application = aws_elastic_beanstalk_application.myApp.name
description = "Application v1.0.0"
bucket = var.sourceCodeS3BucketName
key = var.sourceCodeFilename
}
By directly using the name of the file and the bucket, Terraform does not prepend the bucket name to the key, and it can find the file just fine.
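If you would rather keep the bucket and object under Terraform's management, a variant worth trying (a sketch, not verified against every provider version) is to reference the object's key attribute instead of its id, so only the plain key is passed to the application version:
resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "tf-test-version-label"
  application = aws_elastic_beanstalk_application.default.name
  description = "application version created by terraform"
  bucket      = aws_s3_bucket.bucket.id
  # .key stays "myfile.txt", avoiding the "mybucket/myfile.txt" lookup described above
  key         = aws_s3_bucket_object.default.key
}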