I'm trying to create a BigQuery transfer from an S3 bucket to a BigQuery dataset using Terraform. Despite consulting the docs and digging through the Terraform debug logs, I cannot get past the "Error 400: Request contains an invalid argument." error.
I'm attaching the configuration used to create the S3 transfer, as well as the configuration of a scheduled_query transfer that is created without any problems.
data "google_project" "project" {
}
resource "google_service_account" "bigquery_transfer_account" {
account_id = "bigquery-transfer-account"
display_name = "bigquery-transfer-account"
}
resource "google_project_iam_member" "permissions" {
project = data.google_project.project.project_id
role = "roles/iam.serviceAccountTokenCreator"
member = google_service_account.bigquery_transfer_account.member # "serviceAccount:service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}
resource "google_project_iam_member" "transfer-permissions" {
project = data.google_project.project.project_id
role = "roles/iam.serviceAccountShortTermTokenMinter"
member = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}
resource "google_bigquery_data_transfer_config" "query_config_test" {
depends_on = [google_project_iam_member.permissions, google_project_iam_member.transfer-permissions]
display_name = "my-query"
data_source_id = "scheduled_query"
disabled = true
location = "EU"
destination_dataset_id = google_bigquery_dataset.transfer.dataset_id
params = {
destination_table_name_template = "my_table"
write_disposition = "WRITE_APPEND"
query = "SELECT 1 AS a"
}
service_account_name = google_service_account.bigquery_transfer_account.email
# service_account_name = "service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}
resource "google_bigquery_data_transfer_config" "amazon-transfer" {
depends_on = [google_project_iam_member.permissions, google_project_iam_member.transfer-permissions]
display_name = "transfer-s3-to-bq"
location = "EU"
disabled = true
data_source_id = "amazon_s3"
destination_dataset_id = google_bigquery_dataset.transfer.dataset_id
params = {
access_key_id = aws_iam_access_key.bigquery_transfer.id
destination_table_name_template = "my_table"
data_path = "s3://bq-source/*.csv"
file_format = "CSV"
skip_leading_rows = 1
write_disposition = "WRITE_APPEND"
}
sensitive_params {
secret_access_key = aws_iam_access_key.bigquery_transfer.secret
}
service_account_name = google_service_account.bigquery_transfer_account.email
# service_account_name = "service-${data.google_project.project.number}@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com"
}
terraform apply produces the following output:
...
# module.bigquery_dump.google_bigquery_data_transfer_config.amazon-transfer will be created
+ resource "google_bigquery_data_transfer_config" "amazon-transfer" {
+ data_source_id = "amazon_s3"
+ destination_dataset_id = "transfer"
+ disabled = true
+ display_name = "transfer-s3-to-bq"
+ id = (known after apply)
+ location = "EU"
+ name = (known after apply)
+ params = {
+ "access_key_id" = "xxxxxxxxxxxx"
+ "data_path" = "s3://bq-source/*.csv"
+ "destination_table_name_template" = "raw_events"
+ "file_format" = "CSV"
+ "skip_leading_rows" = "1"
+ "write_disposition" = "WRITE_APPEND"
}
+ project = (known after apply)
+ service_account_name = "bigquery-transfer-account@xxxxx.iam.gserviceaccount.com"
+ sensitive_params {
+ secret_access_key = (sensitive value)
}
}
Plan: 1 to add, 1 to change, 0 to destroy.
module.bigquery_dump.google_bigquery_data_transfer_config.amazon-transfer: Creating...
module.bigquery_dump.aws_security_group.bigquery_dump: Modifying... [id=sg-0c0d14bc1db66f430]
module.bigquery_dump.aws_security_group.bigquery_dump: Modifications complete after 1s [id=sg-0c0d14bc1db66f430]
╷
│ Error: Error creating Config: googleapi: Error 400: Request contains an invalid argument.
│
│ with module.bigquery_dump.google_bigquery_data_transfer_config.amazon-transfer,
│ on ../../modules/aurora_to_bigquery_transfer/bigquery.tf line 77, in resource "google_bigquery_data_transfer_config" "amazon-transfer":
│ 77: resource "google_bigquery_data_transfer_config" "amazon-transfer" {
│
╵
I've also tried using BigQuery's own service agent (as you can see, it is commented out), but changing it doesn't fix the problem. The successful creation of the scheduled-query transfer seems to confirm that I have the correct permissions and that my account is set up correctly.
I've consulted the S3 transfer documentation to check that I've provided every required argument, and I've enabled the Terraform debug log to see the actual API calls, but there is no additional information in it (redacted log attached):
-----------------------------------------------------: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Creating new Config: map[string]interface {}{"dataSourceId":"amazon_s3", "destinationDatasetId":"transfer", "displayName":"update_order", "params":map[string]string{"access_key_id":"xxxx", "data_path":"s3://xxxx/*", "destination_table_name_template":"xxx", "file_format":"CSV", "secret_access_key":"xxx", "write_disposition":"WRITE_TRUNCATE"}}: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Waiting for state to become: [success]: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Retry Transport: starting RoundTrip retry loop: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Retry Transport: request attempt 0: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Creating new Config: map[string]interface {}{"dataSourceId":"amazon_s3", "destinationDatasetId":"transfer", "displayName":"update_user_action", "params":map[string]string{"access_key_id":"xxxx", "data_path":"s3://xxx/*", "destination_table_name_template":"xxx", "file_format":"CSV", "secret_access_key":"xxx", "write_disposition":"WRITE_TRUNCATE"}}: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Waiting for state to become: [success]: timestamp=2023-02-06T13:02:40.149+0100
2023-02-06T13:02:40.149+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:40 [DEBUG] Google API Request Details:
---[ REQUEST ]---------------------------------------
POST /v1/projects/xxxxx/locations/EU/transferConfigs?alt=json&serviceAccountName=xxx@gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com HTTP/1.1
Host: bigquerydatatransfer.googleapis.com
User-Agent: Terraform/1.3.6 (+https://www.terraform.io) Terraform-Plugin-SDK/2.10.1 terraform-provider-google/4.51.0
Content-Length: 423
Content-Type: application/json
Accept-Encoding: gzip
{
"dataSourceId": "amazon_s3",
"destinationDatasetId": "transfer",
"displayName": "update_order",
"params": {
"access_key_id": "xxxx",
"data_path": "s3://xxx/*",
"destination_table_name_template": "xxx",
"file_format": "CSV",
"secret_access_key": "xxx",
"write_disposition": "WRITE_TRUNCATE"
}
}
2023-02-06T13:02:41.461+0100 [INFO] provider.terraform-provider-google_v4.51.0_x5: 2023/02/06 13:02:41 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 400 Bad Request
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Mon, 06 Feb 2023 12:02:41 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"status": "INVALID_ARGUMENT"
}
}
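One way to narrow this down (a diagnostic sketch based only on the config above, not a confirmed fix; the resource name is made up) is to create a stripped-down transfer containing just the documented required arguments for the amazon_s3 data source, then re-add the optional arguments one at a time to find the one the API rejects:
resource "google_bigquery_data_transfer_config" "amazon_transfer_minimal" {
  display_name           = "transfer-s3-to-bq-minimal"
  location               = "EU"
  data_source_id         = "amazon_s3"
  destination_dataset_id = google_bigquery_dataset.transfer.dataset_id

  params = {
    destination_table_name_template = "my_table"
    data_path                       = "s3://bq-source/*.csv"
    file_format                     = "CSV"
    access_key_id                   = aws_iam_access_key.bigquery_transfer.id
  }

  sensitive_params {
    secret_access_key = aws_iam_access_key.bigquery_transfer.secret
  }

  # service_account_name, disabled and skip_leading_rows are deliberately left out;
  # re-adding them one at a time shows which argument triggers the INVALID_ARGUMENT response.
}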
I've configured the following certificate using the aws_acm_certificate resource:
provider "aws" {
alias = "virginia"
region = "us-east-1"
}
resource "aws_acm_certificate" "primary" {
domain_name = var.domain_name
validation_method = "DNS"
subject_alternative_names = ["*.${var.domain_name}"]
provider = aws.virginia
lifecycle {
create_before_destroy = true
}
tags = merge(
var.tags,
{
Name = "${var.project}-ACM-certificate",
}
)
}
resource "aws_route53_record" "certificate_validator_record" {
allow_overwrite = true
name = tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_name
records = [tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_value]
type = tolist(aws_acm_certificate.primary.domain_validation_options)[0].resource_record_type
zone_id = aws_route53_zone.primary.zone_id
ttl = 60
}
resource "aws_acm_certificate_validation" "certificate_validator" {
certificate_arn = aws_acm_certificate.primary.arn
validation_record_fqdns = [aws_route53_record.certificate_validator_record.fqdn]
}
As you can see, I need the certificate to cover the configured domain and its sub-domains. I then configured CloudFront:
module "cdn" {
source = "terraform-aws-modules/cloudfront/aws"
comment = "CloudFront for caching S3 private and static website"
is_ipv6_enabled = true
price_class = "PriceClass_100"
create_origin_access_identity = true
aliases = [var.frontend_domain_name]
origin_access_identities = {
s3_identity = "S3 dedicated for hosting the frontend"
}
origin = {
s3_identity = {
domain_name = module.s3_bucket.s3_bucket_bucket_regional_domain_name
s3_origin_config = {
origin_access_identity = "s3_identity"
}
}
}
default_cache_behavior = {
target_origin_id = "s3_identity"
viewer_protocol_policy = "redirect-to-https"
default_ttl = 5400
min_ttl = 3600
max_ttl = 7200
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
compress = true
query_string = true
}
default_root_object = "index.html"
custom_error_response = [
{
error_code = 403
response_code = 404
response_page_path = "/index.html"
},
{
error_code = 404
response_code = 404
response_page_path = "/index.html"
}
]
viewer_certificate = {
acm_certificate_arn = aws_acm_certificate.primary.arn
ssl_support_method = "sni-only"
}
tags = merge(
var.tags,
{
Name = "${var.project}-Cloudfront",
Stack = "frontend"
}
)
}
But when I try to apply this Terraform configuration I get these errors:
module.cdn.aws_cloudfront_distribution.this[0]: Still creating... [1m0s elapsed]
╷
│ Error: reading ACM Certificate (arn:aws:acm:us-east-1:***:certificate/ARN_PLACEHOLDER): couldn't find resource
│
│ with aws_acm_certificate_validation.certificate_validator,
│ on acm.tf line 33, in resource "aws_acm_certificate_validation" "certificate_validator":
│ 33: resource "aws_acm_certificate_validation" "certificate_validator" {
│
╵
╷
│ Error: error creating CloudFront Distribution: InvalidViewerCertificate: The certificate that is attached to your distribution doesn't cover the alternate domain name (CNAME) that you're trying to add. For more details, see: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html#alternate-domain-names-requirements
│ status code: 400, request id: blabla
│
│ with module.cdn.aws_cloudfront_distribution.this[0],
│ on .terraform/modules/cdn/main.tf line 15, in resource "aws_cloudfront_distribution" "this":
│ 15: resource "aws_cloudfront_distribution" "this" {
│
╵
Releasing state lock. This may take a few moments...
If I go to my AWS account and check the certificate, it shows as issued in us-east-1.
So if the certificate is valid and placed in us-east-1, where am I wrong?
I solved the issue with:
resource "aws_acm_certificate_validation" "certificate_validator" {
provider = aws.virginia
certificate_arn = aws_acm_certificate.primary.arn
validation_record_fqdns = [aws_route53_record.certificate_validator_record.fqdn]
}
The problem was that my certificate validation was running in my default region rather than in us-east-1, where the certificate itself lives.
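A related sketch (assuming the module interface shown above): pointing the CloudFront viewer certificate at the validation resource's certificate_arn output, rather than at the certificate itself, adds an implicit dependency on successful DNS validation, so the distribution is only created once the certificate is actually usable:
viewer_certificate = {
  # aws_acm_certificate_validation exports certificate_arn; referencing it here
  # makes CloudFront wait for the validation to complete.
  acm_certificate_arn = aws_acm_certificate_validation.certificate_validator.certificate_arn
  ssl_support_method  = "sni-only"
}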
I'm using Terraform 1.3.5 and this module previously worked flawlessly, until I renamed the module. Now I am getting this error:
Error: creating EventBridge Target (cleanup-terraform-20221130175229684800000001): ValidationException: RoleArn is required for target arn:aws:events:us-east-1:123456789012:api-destination/services-destination/c187090f-268b-4d9b-b09d-f9b077e0c0cf.
│ status code: 400, request id: 63dc6425-2a94-4f66-b7c2-106b0607d964
│
│ with module.a-eventbridge-trigger.aws_cloudwatch_event_target.api_destination,
│ on ..\a-eventbridge-trigger\main.tf line 61, in resource "aws_cloudwatch_event_target" "api_destination":
│ 61: resource "aws_cloudwatch_event_target" "api_destination" {
Here is the complete content of the main.tf in the module:
# configures api connection
resource "aws_cloudwatch_event_connection" "auth" {
name = "services-token"
description = "Gets oauth bearer token"
authorization_type = "OAUTH_CLIENT_CREDENTIALS"
auth_parameters {
oauth {
authorization_endpoint = "${var.vars.apiBaseUrl}${var.vars.auth}"
http_method = "POST"
client_parameters {
client_id = var.secretContent.Client_Id
client_secret = var.secretContent.Client_Secret
}
oauth_http_parameters {
body {
key = "grant_type"
value = "client_credentials"
is_value_secret = true
}
body {
key = "client_id"
value = var.secretContent.Client_Id
is_value_secret = true
}
body {
key = "client_secret"
value = var.secretContent.Client_Secret
is_value_secret = true
}
}
}
}
}
# configures api destination
resource "aws_cloudwatch_event_api_destination" "request" {
name = "services-destination"
description = "Requests clean up"
invocation_endpoint = "${var.vars.apiBaseUrl}${var.vars.endpoint}"
http_method = "POST"
invocation_rate_limit_per_second = 20
connection_arn = aws_cloudwatch_event_connection.auth.arn
}
# sets up the scheduling
resource "aws_cloudwatch_event_rule" "every_midnight" {
name = "${var.name}-services-cleanup"
description = "Fires on every day at midnight of UTC+0"
schedule_expression = "cron(0 0 * * ? *)"
is_enabled = true
}
# tells the scheduler to call the api destination
resource "aws_cloudwatch_event_target" "api_destination" {
rule = aws_cloudwatch_event_rule.every_midnight.name
arn = aws_cloudwatch_event_api_destination.request.arn
}
And the module is called like this from the root module:
module "a-eventbridge-trigger" {
source = "../a-eventbridge-trigger"
name = local.prefixName
resourceTags = local.commonTags
vars = var.vars
secretContent = var.secrets
}
Here is the providers.tf:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "4.43.0"
}
}
backend "s3" {}
}
What am I missing, and why would it stop working suddenly?
I have run a complete destroy and a fresh apply, but I still get this error.
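Since the error explicitly asks for a RoleArn, here is a hedged sketch of what an API destination target typically needs: an IAM role that EventBridge can assume and that is allowed to call events:InvokeApiDestination, wired into the target via role_arn (the role and policy names below are made up):
resource "aws_iam_role" "eventbridge_invoke" {
  name = "${var.name}-invoke-api-destination" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "events.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "allow_invoke_api_destination" {
  name = "allow-invoke-api-destination" # hypothetical name
  role = aws_iam_role.eventbridge_invoke.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "events:InvokeApiDestination"
      Resource = aws_cloudwatch_event_api_destination.request.arn
    }]
  })
}

resource "aws_cloudwatch_event_target" "api_destination" {
  rule     = aws_cloudwatch_event_rule.every_midnight.name
  arn      = aws_cloudwatch_event_api_destination.request.arn
  role_arn = aws_iam_role.eventbridge_invoke.arn
}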
I'm getting the below error with my Terraform config.
Error: Post "https://35.224.178.141/api/v1/namespaces": x509: certificate signed by unknown authority
on main.tf line 66, in resource "kubernetes_namespace" "example":
66: resource "kubernetes_namespace" "example" {
Here is my config; all I want to do for now is create a cluster, authenticate against it, and create a namespace.
I have searched everywhere and can't see where anyone else has run into this problem.
It is most likely something stupid I am doing. I thought this would be relatively simple, but it's turning out to be a pain. I don't want to have to wrap gcloud commands in my build script.
provider "google" {
project = var.project
region = var.region
zone = var.zone
credentials = "google-key.json"
}
terraform {
backend "gcs" {
bucket = "tf-state-bucket-devenv"
prefix = "terraform"
credentials = "google-key.json"
}
}
resource "google_container_cluster" "my_cluster" {
name = var.kube-clustername
location = var.zone
remove_default_node_pool = true
initial_node_count = 1
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
resource "google_container_node_pool" "primary_preemptible_nodes" {
name = var.kube-poolname
location = var.zone
cluster = google_container_cluster.my_cluster.name
node_count = var.kube-nodecount
node_config {
preemptible = var.kube-preemptible
machine_type = "n1-standard-1"
disk_size_gb = 10
disk_type = "pd-standard"
metadata = {
disable-legacy-endpoints = "true",
}
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
}
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
resource "kubernetes_namespace" "example" {
metadata {
name = "my-first-namespace"
}
}
TL;DR
Change the provider definition to:
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
}
What changed?
The quotes and braces ("{}") were removed from the cluster_ca_certificate and token values, so they are now evaluated as expressions instead of literal strings.
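For illustration, here is the same difference in isolation (a tiny, hypothetical config, unrelated to the cluster above):
locals {
  region = "europe-west3"
}

output "interpolated" {
  value = "prefix-${local.region}" # evaluates to "prefix-europe-west3"
}

output "literal" {
  value = "{local.region}" # stays exactly as the string "{local.region}"
}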
I included the explanation below.
I used your original Terraform file and received the same error as you. I then modified (simplified) your file and added output definitions:
resource "google_container_cluster" "my_cluster" {
OMITTED
}
data "google_client_config" "provider" {}
provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.my_cluster.endpoint}"
cluster_ca_certificate = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
token = "{data.google_client_config.provider.access_token}"
}
output "cert" {
value = "{base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
}
output "token" {
value = "{data.google_client_config.provider.access_token}"
}
Running the above file showed:
$ terraform apply --auto-approve
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creating...
google_container_cluster.my_cluster: Creation complete after 2m48s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
cert = {base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}
token = {data.google_client_config.provider.access_token}
As you can see, the provider received those values as literal strings; they were never evaluated to produce the certificate and token. To fix that, change the provider definition to:
cluster_ca_certificate = base64decode(google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.provider.access_token
Running $ terraform apply --auto-approve once again:
data.google_client_config.provider: Refreshing state...
google_container_cluster.my_cluster: Creation complete after 3m18s [id=projects/PROJECT-NAME/locations/europe-west3-c/clusters/gke-terraform]
kubernetes_namespace.example: Creating...
kubernetes_namespace.example: Creation complete after 0s [id=my-first-namespace]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
cert = -----BEGIN CERTIFICATE-----
MIIDKzCCAhOgAwIBAgIRAO2bnO3FU6HZ0T2u3XBN1jgwDQYJKoZIhvcNAQELBQAw
<--OMMITED-->
a9Ybow5tZGu+fqvFHnuCg/v7tln/C3nVuTbwa4StSzujMsPxFv4ONVl4F4UaGw0=
-----END CERTIFICATE-----
token = ya29.a0AfH6SMBx<--OMMITED-->fUvCeFg
As you can see, the namespace was created. You can check it by running:
$ gcloud container clusters get-credentials CLUSTER-NAME --zone=ZONE
$ kubectl get namespace my-first-namespace
Output:
NAME STATUS AGE
my-first-namespace Active 3m14s
Additional resources:
Terraform.io: Docs: Configuration: Outputs
Terraform.io: Docs: Configuration: Variables
I want to deploy a Cloud Function with Terraform, but it fails.
export TF_LOG=DEBUG
terraform init
terraform plan # it does not fail
terraform apply # this fails
{
"error": {
"code": 400,
"message": "The request has errors",
"errors": [
{
"message": "The request has errors",
"domain": "global",
"reason": "badRequest"
}
],
"status": "INVALID_ARGUMENT"
}
}
What I tried
I tried to change the trigger to HTTP, but the deployment also failed.
I enabled TF_LOG.
I ran terraform plan, and it succeeded.
Terraform template
Below is my main.tf file:
resource "google_pubsub_topic" "topic" {
name = "rss-webhook-topic"
project = "${var.project_id}"
}
resource "google_cloudfunctions_function" "function" {
name = "rss-webhook-function"
entry_point = "helloGET"
available_memory_mb = 256
project = "${var.project_id}"
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = "${google_pubsub_topic.topic.name}"
}
source_archive_bucket = "${var.bucket_name}"
source_archive_object = "${google_storage_bucket_object.archive.name}"
}
data "archive_file" "function_src" {
type = "zip"
output_path = "function_src.zip"
source {
content = "${file("src/index.js")}"
filename = "index.js"
}
}
resource "google_storage_bucket_object" "archive" {
name = "function_src.zip"
bucket = "${var.bucket_name}"
source = "function_src.zip"
depends_on = ["data.archive_file.function_src"]
}
Environment
Terraform version: 0.11.13
Go runtime version: go1.12
+ provider.archive v1.2.2
+ provider.google v2.5.1
The property "runtime" is required.
The configuration below works:
resource "google_cloudfunctions_function" "function" {
name = "rss-webhook-function"
entry_point = "helloGET"
available_memory_mb = 256
project = "${var.project_id}"
runtime = "nodejs8"
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = "${google_pubsub_topic.topic.name}"
}
source_archive_bucket = "${var.bucket_name}"
source_archive_object = "${google_storage_bucket_object.archive.name}"
}
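As a small side note (a sketch reusing the names from the question, not needed for the fix itself), the bucket object can reference the archive's output_path directly; the implicit dependency this creates also makes the explicit depends_on unnecessary:
resource "google_storage_bucket_object" "archive" {
  name   = "function_src.zip"
  bucket = "${var.bucket_name}"
  # Referencing the archive_file data source creates an implicit dependency,
  # so depends_on = ["data.archive_file.function_src"] can be dropped.
  source = "${data.archive_file.function_src.output_path}"
}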
While building with terraform plan I am getting the error below. This is my .tf file:
provider "vsphere" {
user = "aaa"
password = "aaa"
vsphere_server = "172.22.1.139"
allow_unverified_ssl = "true"
}
resource "vsphere_folder" "frontend" {
path = "VirtualMachines"
datacenter = "A2MS0110-VMFS5"
}
resource "vsphere_virtual_machine" "FIRST_VM" {
name = "FIRST_VM"
vcpu = 1
memory = 2048
datacenter = "A2MS0110-VMFS5"
network_interface {
label = "VM Network"
}
disk {
datastore = "A2MS0110-VMFS5"
vmdk = "/demo2"
}
}
I am getting the following error:
* provider.vsphere: Error setting up client: Post https://172.22.1.139/sdk: net/http: TLS handshake timeout
Is this a network problem or a configuration error?