I have created a query that shows the expected behavior when executed manually. When I try to create a scheduled query from it, I always get the error BigQuery error in query operation: Request contains an invalid argument. It is exactly the same query, except that I want to schedule it.
Both the web UI and the CLI display the same error: BigQuery error in query operation: Request contains an invalid argument. Even --apilog=stdout returns nothing meaningful to me.
We use a similar scheduled query on another table; the only difference is that the target and origin location in the new query is --location=europe-west3 instead of us.
Command:
bq query \
--append \
--display_name=XXX \
--schedule='every 24 hours' \
--destination_table=XXX \
--use_legacy_sql=false \
'[...]'
Stdout:
INFO:googleapiclient.model:--request-start--
INFO:googleapiclient.model:-headers-start-
INFO:googleapiclient.model:content-type: application/json
INFO:googleapiclient.model:accept-encoding: gzip, deflate
INFO:googleapiclient.model:accept: application/json
INFO:googleapiclient.model:user-agent: google-api-python-client/1.7.10 (gzip)
INFO:googleapiclient.model:-headers-end-
INFO:googleapiclient.model:-path-parameters-start-
INFO:googleapiclient.model:parent: projects/XXX/locations/-
INFO:googleapiclient.model:-path-parameters-end-
INFO:googleapiclient.model:body: {"destinationDatasetId": "III", "displayName": "scheduledQueryName", "schedule": "every 24 hours", "scheduleOptions": {"disableAutoScheduling": false}, "dataSourceId": "scheduled_query", "params": {"query": "[.........]", "write_disposition": "", "destination_table_name_template": "[.........]", "partitioning_field": ""}}
INFO:googleapiclient.model:query: ?authorizationCode=&alt=json
INFO:googleapiclient.model:--request-end--
INFO:googleapiclient.discovery:URL being requested: POST https://bigquerydatatransfer.googleapis.com/v1/projects/XXX/locations/-/transferConfigs?authorizationCode=&alt=json
INFO:googleapiclient.model:--response-start--
INFO:googleapiclient.model:status: 400
INFO:googleapiclient.model:content-length: 285
INFO:googleapiclient.model:x-xss-protection: 0
INFO:googleapiclient.model:x-content-type-options: nosniff
INFO:googleapiclient.model:transfer-encoding: chunked
INFO:googleapiclient.model:vary: Origin, X-Origin, Referer
INFO:googleapiclient.model:server: ESF
INFO:googleapiclient.model:-content-encoding: gzip
INFO:googleapiclient.model:cache-control: private
INFO:googleapiclient.model:date: Tue, 19 Nov 2019 14:06:45 GMT
INFO:googleapiclient.model:x-frame-options: SAMEORIGIN
INFO:googleapiclient.model:alt-svc: quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000
INFO:googleapiclient.model:content-type: application/json; charset=UTF-8
INFO:googleapiclient.model:{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"errors": [
{
"message": "Request contains an invalid argument.",
"domain": "global",
"reason": "badRequest"
}
],
"status": "INVALID_ARGUMENT"
}
}
INFO:googleapiclient.model:--response-end--
BigQuery error in query operation: Request contains an invalid argument.
Any clue as to what can cause BigQuery error in query operation: Request contains an invalid argument.?
The underlying error is: BigQuery Data Transfer Service does not yet support location europe-west3. Please select a dataset in a supported location.
You have to change the location to EU.
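For example, a sketch with the bq CLI (dataset and table names are placeholders; the point is that the destination dataset lives in the EU multi-region rather than europe-west3):
bq mk --dataset --location=EU my_project:my_dataset
bq --location=EU query \
  --append_table=true \
  --display_name='my_scheduled_query' \
  --schedule='every 24 hours' \
  --destination_table=my_dataset.my_table \
  --use_legacy_sql=false \
  'SELECT ...'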
I am trying to create a destination for SP-API notifications. I have already changed the permission policy of my AWS queue to grant create-message and read-message-attributes permissions.
I am using STS credentials to sign the request in Postman. This request is supposed to be a grantless operation, so ideally it shouldn't ask for an access token. Please help me understand what I could possibly be doing wrong.
Request in Postman:
POST /notifications/v1/destinations HTTP/1.1
Host: sellingpartnerapi-eu.amazon.com
X-Amz-Content-Sha256: beaead3198f7da1e70d03ab969765e0821b24fc913697e929e726aeaebf0eba3
X-Amz-Security-Token: FwoGZXIvYXdzEHMaDI8z8g0xqn42DSi0ISKoAXEp97wFc6YYdaSZ9txcAswRRsRjZ32d++T4APe/rLIL1rDfq9A2c2KYuLsF8+9F/N7brZarJQymqFnQ57JcGugxK6Npg5o/UQjNhvnI0EUAIqTptb/bXLXnmz7I2K2lhGKgV7PEkqAQlX/iYGI5RoNN0wK1QE3IY3T1miyRLF40PGNHt16WQaZPTXsMfG6OvaFuMa/ijchvnQ+3KP9Hs62vVZoxeC0G3ii7rtyYBjItb1Ltu7wcpzAXRO6W/BZWWqNN28V2ZS+e0qiYryYtgdnv0Ov9KBDBJFWKplxu
X-Amz-Date: 20220906T100237Z
Authorization: AWS4-HMAC-SHA256 Credential=ASIA4RJ32PS7YHU6JTGP/20220906/eu-west-1/execute-api/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=2c0c3727088ffa984f181c38c89afe305840cc0058cada48480c3103f5c544fa
Content-Type: application/json
Content-Length: 170
{
"name": "SaralDestination",
"resourceSpecification":
{
"sqs":
{
"arn": "arn:aws:sqs:eu-west-1:861803281599:SPNotificationQueue"
}
}
}
Response:
{
"errors": [
{
"message": "Access to requested resource is denied.",
"code": "Unauthorized",
"details": "Access token is missing in the request header."
}
]
}
You have to do a POST request to https://api.amazon.com/auth/o2/token
and then include the returned access_token (which starts with Atza) in the headers as x-amz-access-token.
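For reference, the token request can be made with curl like this (CLIENT_ID and CLIENT_SECRET are placeholders for your LWA application credentials; sellingpartnerapi::notifications is the scope used for grantless notification operations):
curl -s -X POST https://api.amazon.com/auth/o2/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials' \
  -d 'scope=sellingpartnerapi::notifications' \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET"
The access_token field of the JSON response is what goes into the x-amz-access-token header.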
I decided to automate the creation of Google Cloud projects using Terraform.
One resource that Terraform will create during the run is a new GSuite user. This is done using the terraform-provider-gsuite. So I set everything up (service account, domain-wide delegation, etc.) and it all works fine when I run the Terraform steps from my command line.
Next, instead of relying on my command line, I decided to have a Cloud Build trigger that would execute Terraform init-plan-apply. As you all know, Cloud Builds run under the identity of the GCB service account. This means we need to give that SA the permissions that Terraform might need during the execution, for example with a binding like the one below. So far so good.
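For example, granting a role to that SA looks like this (my-project and roles/editor are placeholders; PROJECT_NUMBER@cloudbuild.gserviceaccount.com is the default build SA):
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com' \
  --role='roles/editor'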
So I ran the build, and I saw that the only resource Terraform is not able to create is the GSuite user. Digging through the logs I found these two requests (and their responses):
GET /admin/directory/v1/users?alt=json&customer=my_customer&prettyPrint=false&query=email%3Alolloso-admin%40codedby.pm HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
HTTP/2.0 400 Bad Request
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 400,
    "message": "Invalid Input",
    "errors": [
      {
        "domain": "global",
        "reason": "invalid"
      }
    ]
  }
}
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
  "changePasswordAtNextLogin": true,
  "externalIds": [],
  "includeInGlobalAddressList": true,
  "name": {
    "familyName": "********",
    "givenName": "*******"
  },
  "orgUnitPath": "/",
  "password": "********",
  "primaryEmail": "*********",
  "sshPublicKeys": []
}
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 403,
    "message": "Request had insufficient authentication scopes.",
    "errors": [
      {
        "message": "Insufficient Permission",
        "domain": "global",
        "reason": "insufficientPermissions"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}
I think this is the API complaining that the Cloud Build service account does not have enough rights to access the Directory API. And here is where the situation gets wild.
To fix that, I thought of granting domain-wide delegation to the Cloud Build SA. But that SA is special and I could not find a way to grant it.
I then tried to give the serviceAccountUser role to the Cloud Build SA on my SA (the one which has domain-wide delegation), with the binding shown below. But I did not manage to succeed; the build still throws the same insufficient-permission error.
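This is roughly what I attempted (PROJECT_NUMBER stands in for my real project number):
gcloud iam service-accounts add-iam-policy-binding \
  sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com \
  --member='serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com' \
  --role='roles/iam.serviceAccountUser'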
I then tried to use my SA (the one with domain-wide delegation) as a custom Cloud Build service account. Also there, no luck.
Is it even possible, from a Cloud Build, to access resources for which one would normally use domain-wide delegation?
Thanks
UPDATE 1 (using custom build service account)
As per John's comment, I tried to use a user-specified service account to execute my build. The necessary setup info has been taken from the official guide.
This is my cloudbuild.yaml file
steps:
  - id: 'tf init'
    name: 'hashicorp/terraform'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        terraform init
  - id: 'tf plan'
    name: 'hashicorp/terraform'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        terraform plan
  - id: 'tf apply'
    name: 'hashicorp/terraform'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        terraform apply -auto-approve
logsBucket: 'gs://tf-project-creator-cloudbuild-logs'
serviceAccount: 'projects/tf-project-creator/serviceAccounts/sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com'
options:
  env:
    - 'TF_LOG=DEBUG'
where sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com is the service account which has domain-wide delegation on my Google Workspace.
I then executed the build manually
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json; gcloud builds submit --config cloudbuild.yaml
specifying the JSON private key of the same SA as above.
I would have expected the build to pass, but I still get the same error as above:
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
  "changePasswordAtNextLogin": true,
  "externalIds": [],
  "includeInGlobalAddressList": true,
  "name": {
    "familyName": "REDACTED",
    "givenName": "REDACTED"
  },
  "orgUnitPath": "/",
  "organizations": [],
  "password": "REDACTED",
  "primaryEmail": "REDACTED",
  "sshPublicKeys": []
}
-----------------------------------------------------
2021/03/06 17:26:19 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sat, 06 Mar 2021 17:26:19 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
  "error": {
    "code": 403,
    "message": "Request had insufficient authentication scopes.",
    "errors": [
      {
        "message": "Insufficient Permission",
        "domain": "global",
        "reason": "insufficientPermissions"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}
Is there anything I am missing?
UPDATE 2 (check on active identity when submitting a build)
As deviavir pointed out in their comment, I tried:
- enabling "Service Accounts" in the GCB settings, but as suspected it did not work;
- double-checking the active identity while submitting the build. One of the limitations of using a custom build SA is that the build must be manually triggered. So using gcloud, that means
gcloud builds submit --config cloudbuild.yaml
Until now, when executing this command, I have always prepended it by setting the GOOGLE_APPLICATION_CREDENTIALS variable, like this:
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json
The specified private key is the key of my build SA (the one with domain-wide delegation). While doing that, I was always logged in to gcloud with another account (the Owner of the project), which does not have the domain-wide delegation permission. But I thought that by setting GOOGLE_APPLICATION_CREDENTIALS, gcloud would pick up those credentials. I still think that is the case, but I tried to then submit the build while being logged in to gcloud with that same build SA.
So I did
gcloud auth activate-service-account sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com --key-file='.secrets/sa-terraform-project-creator.json'
and right after
gcloud builds submit --config cloudbuild.yaml
Yet again, I hit the same permission problem when accessing the Directory API.
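For reference, the identity gcloud itself uses can be double-checked with
gcloud auth list
gcloud config get-value account
(it is separate from GOOGLE_APPLICATION_CREDENTIALS, which only affects clients using Application Default Credentials).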
As deviavir suspected, I am starting to think that during the execution of the build, the call to the Directory API is made with the wrong credentials.
Is there a way to log the identity used while executing certain Terraform plugin API calls? That would help a lot.
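One idea, sketched here, would be a debug step in cloudbuild.yaml that prints the service account the build actually runs as (the metadata server is reachable from inside build steps; gcr.io/cloud-builders/curl is the stock curl builder):
  - id: 'whoami'
    name: 'gcr.io/cloud-builders/curl'
    args: ['-s', '-H', 'Metadata-Flavor: Google', 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email']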
When trying to add high availability on an existing Cloud SQL instance using:
gcloud sql instances patch $INSTANCE --project $PROJECT --availability-type regional
the process fails with this message
The following message will be used for the patch API method.
{"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
ERROR: (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
It also fails using the web interface.
gcloud version: Google Cloud SDK 280.0.0
This is the output of the log (not much help that I can see):
2020-02-14 11:01:34,476 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances']
2020-02-14 11:01:34,510 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances', u'patch']
2020-02-14 11:01:34,517 DEBUG root Running [gcloud.sql.instances.patch] with arguments: [--availability-type: "regional", --project: "$PROJECT", INSTANCE: "$INSTANCE"]
2020-02-14 11:01:35,388 INFO ___FILE_ONLY___ The following message will be used for the patch API method.
2020-02-14 11:01:35,398 INFO ___FILE_ONLY___ {"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
2020-02-14 11:01:35,865 DEBUG root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
Traceback (most recent call last):
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 981, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line 807, in Run
resources = command_instance.Run(args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 306, in Run
return RunBasePatchCommand(args, self.ReleaseTrack())
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 278, in RunBasePatchCommand
instance=instance_ref.instance))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\third_party\apis\sql\v1beta4\sql_v1beta4_client.py", line 697, in Patch
config, request, global_params=global_params)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 731, in _RunMethod
return self.ProcessHttpResponse(method_config, http_response, request)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 737, in ProcessHttpResponse
self.__ProcessHttpResponse(method_config, http_response, request))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 604, in __ProcessHttpResponse
http_response, method_config=method_config, request=request)
HttpBadRequestError: HttpError accessing <https://sqladmin.googleapis.com/sql/v1beta4/projects/$PROJECT/instances/$INSTANCE?alt=json>: response: <{'status': '400', 'content-length': '269', 'x-xss-protection': '0', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 14 Feb 2020 10:01:35 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 400,
"message": "The incoming request contained invalid data.",
"errors": [
{
"message": "The incoming request contained invalid data.",
"domain": "global",
"reason": "invalidRequest"
}
]
}
}
>
2020-02-14 11:01:35,868 ERROR root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
2020-02-14 11:01:35,898 DEBUG root Metrics reporting process started...
Edit:
When using the gcloud CLI command (gcloud sql instances patch with three input parameters):
- Both $PROJECT and $INSTANCE do exist, since gcloud sql databases list --instance $INSTANCE --project $PROJECT works fine.
- --availability-type=regional is documented, so it should work.
- I'm not constructing the request manually; I'm using the gcloud CLI.
When using the console.cloud.google.com web interface:
Main menu -> SQL -> select instance -> Enable High Availability.
It's a button; no parameters are added by myself.
Both return the same error: "The incoming request contained invalid data."
I can't see how I may be doing it wrong.
Please check the data in your incoming request. I used the Method: instances.patch and it worked as expected for me, with:
- project
- instance-name
- request body:
"settings": {
"availabilityType": "REGIONAL",
"databaseFlags": [
{
"name": "sql_mode",
"value": "TRADITIONAL"
},
{
"name": "default_time_zone",
"value": "+01:00"
}
]
}
}
Curl command:
curl --request PATCH \
  'https://sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance?key=[YOUR_API_KEY]' \
  --header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
  --header 'Accept: application/json' \
  --header 'Content-Type: application/json' \
  --data '{"settings":{"availabilityType":"REGIONAL","databaseFlags":[{"name":"sql_mode","value":"TRADITIONAL"},{"name":"default_time_zone","value":"+01:00"}]}}' \
  --compressed
Response 200:
{
  "kind": "sql#operation",
  "targetLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance",
  "status": "PENDING",
  "user": "#cloud.com",
  "insertTime": "2020-02-14T12:35:37.615Z",
  "operationType": "UPDATE",
  "name": "3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
  "targetId": "your-instance",
  "selfLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/wave25-vladoi/operations/3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
  "targetProject": "your-project"
}
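The PATCH returns a long-running operation (status PENDING above). If you want to follow it to completion, something like this should work (names taken from the response above):
gcloud sql operations describe 3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc --project your-project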
I'm running a simple Dataflow pipeline with the Python SDK for counting keywords. The job runs fine for pre-processing the input data, but it fails at the grouping/output steps with the following error.
I guess the logs say the worker is having an issue accessing the temp folder, but the storage bucket in our project exists and has the proper permissions. What could be causing this?
"/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcsio.py", line
606, in write raise self.upload_thread.last_error # pylint:
disable=raising-bad-type HttpError: HttpError accessing
<https://www.googleapis.com/resumable/upload/storage/v1/b/[PROJECT-NAME-REDACTED]-temp-2016-08-07_04-42-52/o?uploadType=resumable&alt=json&name=0015bf8d-fa87-4c9a-82d6-8ffcd742d770>:
response: <{'status': '404', 'alternate-protocol': '443:quic',
'content-length': '165', 'vary': 'Origin, X-Origin', 'server':
'UploadServer', 'x-guploader-uploadid':
'AEnB2UoYRPUwhz-OXlJ437k0J8Uxd1lJvTsFbfVJF_YMP2GQEvmdDpo7e-3DVhuqNd9b1A_RFPbfIcK6hCsFcar-hdI94rqJZUvATcDmGRRIvHecAt5CTrg',
'date': 'Sun, 07 Aug 2016 04:43:23 GMT', 'alt-svc': 'quic=":443";
ma=2592000; v="36,35,34,33,32,31,30"', 'content-type':
'application/json; charset=UTF-8'}>, content <{ "error": { "errors": [
{ "domain": "global", "reason": "notFound", "message": "Not Found" }
], "code": 404, "message": "Not Found" } } >
This is https://issues.apache.org/jira/browse/BEAM-539: TextFileSink does not allow bucket roots as output locations. As a workaround, please use a subdirectory path (e.g. gs://foo/bar) as the output location.
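For example, with a Python SDK pipeline this only changes how the output path is passed (bucket, file, and project names below are placeholders):
# Fails (BEAM-539): output at the bucket root, e.g. --output gs://my-bucket
# Works: output under a subdirectory, e.g. --output gs://my-bucket/results/counts
python my_pipeline.py \
  --project my-project \
  --temp_location gs://my-bucket/tmp/ \
  --output gs://my-bucket/results/counts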
I have created an API for CRUD operations in Django REST Framework. From the browsable API I can view/update/delete records, but when I tried to perform an update via HTTPie it doesn't work.
URL -> http://localhost:8000/api/user/profile/1/
Result from browser ->
{
  "user": 3,
  "subject": [
    1,
    3,
    4
  ],
  "phone": "897897897",
  "address": "xcgsajgchagclkk"
}
HTTPie request -> http PUT http://localhost:8000/api/user/profile/1/ user=3 subject=[1,2] phone=333 address=my
Error ->
{
  "subject": [
    "Expected a list of items but got type \"unicode\"."
  ]
}
As we can see, the error is about the format of the data sent in the request, but I am sending a list in subject ([1,2]). So why is it giving this error?
Edit: headers of the response:
HTTP/1.0 400 BAD REQUEST
Allow: GET, PUT, PATCH, DELETE, HEAD, OPTIONS
Content-Type: application/json
Date: Fri, 30 Oct 2015 05:33:58 GMT
Server: WSGIServer/0.1 Python/2.7.6
Vary: Accept, Cookie
X-Frame-Options: SAMEORIGIN
As @BogdanIulianBursuc suggested in his comments, HTTPie uses a different syntax for submitting lists.
So the right syntax would be subject:='[1,2]'.
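A corrected version of the request above would then be (user:=3 likewise sends a JSON number instead of a string):
http PUT http://localhost:8000/api/user/profile/1/ \
  user:=3 \
  subject:='[1,2]' \
  phone=333 \
  address=my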