Why can't I deploy to Cloud Functions?

I've been deploying successfully for months, but this morning I suddenly started getting this error:
│ Error: Error while updating cloudfunction configuration: Error waiting for Updating CloudFunctions Function: Error code 3, message: Build failed: curl: (22) The requested URL returned error: 404
│
│ gzip: stdin: unexpected end of file
│ tar: Child returned status 1
│ tar: Error is not recoverable: exiting now; Error ID: 637fe2a4
│
│ with google_cloudfunctions_function.syncFiles,
│ on functions.tf line 396, in resource "google_cloudfunctions_function" "syncFiles":
│ 396: resource "google_cloudfunctions_function" "syncFiles" {
│
This is the Terraform configuration. We zip the directory and hand the archive to Cloud Functions to deploy:
data "archive_file" "source-zip" {
type = "zip"
source_dir = "${path.root}/../dist/"
output_path = "${path.root}/../dist/files/${var.app_name}.zip"
excludes = ["files/**"]
}
resource "google_storage_bucket_object" "deploy-zip" {
name = "${var.app_name}/${var.app_name}-${data.archive_file.source-zip.output_md5}.zip"
bucket = "${var.env_name}-deploy"
source = "${path.root}/../dist/files/${var.app_name}.zip"
depends_on = [data.archive_file.source-zip]
}
output "deploy_zip" {
value = google_storage_bucket_object.deploy-zip.name
}
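The syncFiles function resource at functions.tf line 396 isn't shown here; presumably it consumes the uploaded object roughly along these lines (a hypothetical sketch; the runtime, entry point, and trigger are assumptions, not from the question):

resource "google_cloudfunctions_function" "syncFiles" {
  name        = "syncFiles"
  runtime     = "nodejs16" # assumption: actual runtime not shown
  entry_point = "syncFiles" # assumption

  # Point the function at the content-addressed zip uploaded above.
  source_archive_bucket = google_storage_bucket_object.deploy-zip.bucket
  source_archive_object = google_storage_bucket_object.deploy-zip.name

  trigger_http = true # assumption: trigger type not shown
}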
What could cause this error?
Is this an internal problem?
I have a ticket open with Google support, but nothing useful has come of it yet.

Go to Cloud Build, select your region, and look at the build history/logs; that should tell you what is failing.
It is possibly a package issue.

Related

Terraform AWS - Unable to update Transfer Server with incomplete error message

I am trying to update a test AWS Transfer Server because I was unable to connect to it via SFTP.
Now, trying to use the FTP/FTPS protocols instead, I have followed the same layout as the example in the docs.
This is the example in the docs:
resource "aws_transfer_server" "example" {
endpoint_type = "VPC"
endpoint_details {
subnet_ids = [aws_subnet.example.id]
vpc_id = aws_vpc.example.id
}
protocols = ["FTP", "FTPS"]
certificate = aws_acm_certificate.example.arn
identity_provider_type = "API_GATEWAY"
url = "${aws_api_gateway_deployment.example.invoke_url}${aws_api_gateway_resource.example.path}"
}
And here is my code
resource "aws_transfer_server" "transfer_x3" {
tags = {
Name = "${var.app}-${var.env}-transfer-x3-server"
}
endpoint_type = "VPC"
endpoint_details {
vpc_id = data.aws_vpc.vpc_global.id
subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
}
protocols = ["FTP", "FTPS"]
certificate = var.certificate_arn
identity_provider_type = "API_GATEWAY"
url = "https://${aws_api_gateway_rest_api.Api.id}.execute-api.${var.region}.amazonaws.com/latest/servers/{serverId}/users/{username}/config"
invocation_role = data.aws_iam_role.terraform-commande.arn
}
And here is the error message
╷
│ Error: error creating Transfer Server: InvalidRequestException: Bad value in IdentityProviderDetails
│
│ with aws_transfer_server.transfer_x3,
│ on transfer-x3.tf line 1, in resource "aws_transfer_server" "transfer_x3":
│ 1: resource "aws_transfer_server" "transfer_x3" {
│
╵
My guess is that it doesn't like the value of the url parameter.
I have tried using the same form as the example, url = "${aws_api_gateway_deployment.ApiDeployment.invoke_url}${aws_api_gateway_resource.ApiResourceServerIdUserUsernameConfig.path}", but encountered the same error message.
I have tried reordering the parameters in case that was the problem, but I got the same error every time I ran terraform apply.
The commands terraform validate and terraform plan didn't show the error message at all.
What value does the url parameter need? Or is a parameter missing from my resource declaration?
As per the documentation (CloudFormation in this case) [1], the examples say the only thing needed is the invoke URL of the API Gateway:
...
"IdentityProviderDetails": {
    "InvocationRole": "Invocation-Role-ARN",
    "Url": "API_GATEWAY-Invocation-URL"
},
"IdentityProviderType": "API_GATEWAY",
...
Comparing that to the attributes provided by the API Gateway stage resource in Terraform, the only thing that is needed is the invoke_url attribute [2].
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-server.html#aws-resource-transfer-server--examples
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_stage#invoke_url
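Concretely, that suggests pointing url at the stage's invoke_url rather than hand-building the URL with path placeholders. A minimal sketch, assuming an aws_api_gateway_stage resource named "ApiStage" (the stage resource name is an assumption, not from the question):

resource "aws_transfer_server" "transfer_x3" {
  endpoint_type = "VPC"

  endpoint_details {
    vpc_id     = data.aws_vpc.vpc_global.id
    subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
  }

  protocols              = ["FTP", "FTPS"]
  certificate            = var.certificate_arn
  identity_provider_type = "API_GATEWAY"

  # Per the answer above, the stage's invoke_url alone is what the
  # IdentityProviderDetails Url expects; no extra path segments.
  url             = aws_api_gateway_stage.ApiStage.invoke_url # hypothetical stage name
  invocation_role = data.aws_iam_role.terraform-commande.arn
}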

Pass GCP service account JSON file

We are creating a Confluent Kafka sink connector (https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_connector) using Terraform.
resource "confluent_connector" "gcs-sink" {
for_each = { for topic in var.topics : "${topic.name} ${topic.tasks}" => topic }
environment {
id = var.env_id
}
kafka_cluster {
id = var.cluster_id
}
config_nonsensitive = {
"name" = "${each.value.name}-gcs-connector"
"connector.class" = "GcsSink"
"topics" = "${each.value.name}"
"kafka.auth.mode" = "SERVICE_ACCOUNT"
"kafka.service.account.id" = "${var.connector_sa}"
"gcs.bucket.name" = "${var.gcs_bucket_name}"
"input.data.format" = "AVRO"
"output.data.format" = "AVRO"
"time.interval" = "HOURLY"
"flush.size" = "1000"
"tasks.max" = "${each.value.tasks}"
"topics.dir" = "avro-hourly"
"path.format" = "'process_date'=YYYY-MM-dd/'hour'=HH"
"rotate.schedule.interval.ms" = "60000"
"gcs.credentials.config" = var.gcs_sa_json
}
}
We need to pass the service account JSON key file to gcs.credentials.config, so I'm placing the JSON file in GCP Secret Manager, reading it from Secret Manager at runtime, and storing it in the variable gcs_sa_json, but I'm running into the issue below.
Error: error waiting for Connector "g-gg-prod-gcs-connector" to provision: connector "display_name"="g-gg-prod-gcs-connector" provisioning status is "FAILED": Unable to validate configuration. If an update was made to the configuration, this means that the configuration was invalid, and the connector continues to operate on a previous configuration that passed validation. Errors:
│ gcs.credentials.config: Unable to retrieve credentials
│ gcs.bucket.name: Unable to retrieve credentials
│ . You might need to remove Connector manually before retrying.
│
│ with module.sink_connector.confluent_connector.gcs-sink["g-gg-prod-topic 2"],
│ on ../../terraform-modules/confluent-kafka/sink-connector/main.tf line 1, in resource "confluent_connector" "gcs-sink":
│ 1: resource "confluent_connector" "gcs-sink" {
I used the jsondecode function as well, but in vain.
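One thing that stands out is that the key is being passed in config_nonsensitive; the confluent_connector resource also has a config_sensitive map intended for credentials. A minimal sketch of reading the key from Secret Manager and passing it as a sensitive config value (the secret name gcs-sa-key and the data source wiring are assumptions, not from the question):

data "google_secret_manager_secret_version" "gcs_sa" {
  secret = "gcs-sa-key" # hypothetical secret name
}

resource "confluent_connector" "gcs-sink" {
  environment {
    id = var.env_id
  }

  kafka_cluster {
    id = var.cluster_id
  }

  # Credentials go in config_sensitive as the raw JSON string
  # (no jsondecode); everything else stays in config_nonsensitive.
  config_sensitive = {
    "gcs.credentials.config" = data.google_secret_manager_secret_version.gcs_sa.secret_data
  }

  config_nonsensitive = {
    "name"            = "gcs-connector"
    "connector.class" = "GcsSink"
    # ... remaining settings as in the resource above ...
  }
}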

Terraform resource aws_ssm_association is throwing an error

I have successfully applied the following terraform code.
resource "aws_ssm_association" "webssmassoc" {
name = "arn:aws:ssm:eu-west-1:*********:document/a4s-bl-automation"
association_name = "${var.service_name}-dt-webserver-association"
parameters = {
AssumeRole = aws_iam_role.dt_automation_role.arn
InstanceId = data.aws_instance.webinstance.id
}
apply_only_at_cron_interval = true
schedule_expression = "cron(0 14 ? * ${local.dayOfWeek} *)"
}
I now make a small change in the schedule expression and run terraform plan; Terraform detects the change properly.
  # aws_ssm_association.webssmassoc will be updated in-place
  ~ resource "aws_ssm_association" "webssmassoc" {
        id                  = "0b9ee1a4-6011-4a9c-9055-2cca172b061e"
        name                = "arn:aws:ssm:eu-west-1:497882509041:document/a4s-oneagent-reboot-automation"
      ~ schedule_expression = "cron(0 14 ? * TUE *)" -> "cron(0 15 ? * TUE *)"
        # (6 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
When I run terraform apply, it errors out.
│ Error: Error updating SSM association: ValidationException: Must specify both Automation Target Parameter Name and Targets
│ status code: 400, request id: e4e80ff6-1235-4355-9221-2031a1fb922d
│
│ with aws_ssm_association.webssmassoc,
│ on main.tf line 118, in resource "aws_ssm_association" "webssmassoc":
│ 118: resource "aws_ssm_association" "webssmassoc" {
Extra information:
The document being associated is an automation document.
The provider being used is hashicorp/aws v4.33.0.
Terraform version is v1.1.5.
Is this a Terraform bug? Or is it working the way it is intended to?
Thanks in advance.
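For reference, the ValidationException itself names the two missing pieces. A hedged sketch of what the update path seems to want, adding automation_target_parameter_name together with a targets block (the parameter name InstanceId and the ParameterValues target key are assumptions based on the document's parameters, not a confirmed fix):

resource "aws_ssm_association" "webssmassoc" {
  name             = "arn:aws:ssm:eu-west-1:*********:document/a4s-bl-automation"
  association_name = "${var.service_name}-dt-webserver-association"

  # What the error asks for: both of these together.
  automation_target_parameter_name = "InstanceId" # assumption: the automation parameter to target on
  targets {
    key    = "ParameterValues"
    values = [data.aws_instance.webinstance.id]
  }

  # InstanceId moves out of parameters once it is supplied via targets.
  parameters = {
    AssumeRole = aws_iam_role.dt_automation_role.arn
  }

  apply_only_at_cron_interval = true
  schedule_expression         = "cron(0 14 ? * ${local.dayOfWeek} *)"
}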

Unable to assign LF-tags to lake formation database using Terraform

I prepared the following Terraform script to assign an LF-tag to a database in Lake Formation.
resource "aws_lakeformation_resource_lf_tags" "gm_access" {
count = length(var.db_config)
database {
name = "gm_${var.db_config[count.index].name}_${terraform.workspace}"
}
lf_tag {
key = "access"
value = var.db_config[count.index].access
}
}
The LF-Tag access was already created manually in AWS (historically), with its values defined.
I received these errors:
│ Error: creating AWS Lake Formation Resource LF Tags (): attempted to add 1 tags, 1 failures
│
│ with aws_lakeformation_resource_lf_tags.gm_access[0],
│ on self_serve.tf line 72, in resource "aws_lakeformation_resource_lf_tags" "gm_access":
│ 72: resource "aws_lakeformation_resource_lf_tags" "gm_access" {
│
Any advice, please?
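One detail worth noting: the empty parentheses in "Resource LF Tags ()" suggest no catalog ID was resolved. A sketch that sets catalog_id explicitly (an assumption drawn from the error text, not a confirmed fix; the caller-identity wiring is illustrative):

data "aws_caller_identity" "current" {}

resource "aws_lakeformation_resource_lf_tags" "gm_access" {
  count      = length(var.db_config)
  catalog_id = data.aws_caller_identity.current.account_id # assumption: the LF-Tag lives in this account's catalog

  database {
    name = "gm_${var.db_config[count.index].name}_${terraform.workspace}"
  }

  lf_tag {
    key   = "access"
    value = var.db_config[count.index].access
  }
}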

Error retrieving IAM policy on GCP when running terraform plan?

Hello, one of my modules for a Terraform bootstrap on GCP contains:
resource "google_organization_iam_member" "organizationAdmin" {
for_each = toset(var.users)
org_id = var.organization_id
role = "roles/resourcemanager.organizationAdmin"
member = each.value
}
Right now I'm getting:
Error: Error retrieving IAM policy for organization "903021035085 ": googleapi: Error 400: Request contains an invalid argument., badRequest
│
│ with module.bootstrap_permissions.google_organization_iam_member.organizationAdmin["group:gcp-organization-admins#juliusoh.tech"],
│ on ../modules/bootstrap_permissions/main.tf line 1, in resource "google_organization_iam_member" "organizationAdmin":
│ 1: resource "google_organization_iam_member" "organizationAdmin" {
The account making the request has Owner permission at the organization level. Is there a reason why I am getting an error when I run terraform plan?
The value of var.organization_id has a trailing space (see the error message), e.g., "123 " instead of "123". Remove this space and it should work.
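A small guard along these lines would make the module resilient to this class of input error (a minimal sketch; the variable block is assumed to live in the same module):

variable "organization_id" {
  type = string

  validation {
    condition     = var.organization_id == trimspace(var.organization_id)
    error_message = "organization_id must not contain leading or trailing whitespace."
  }
}

resource "google_organization_iam_member" "organizationAdmin" {
  for_each = toset(var.users)
  org_id   = trimspace(var.organization_id) # defensively strip stray whitespace
  role     = "roles/resourcemanager.organizationAdmin"
  member   = each.value
}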