When trying to add high availability to an existing Cloud SQL instance using:
gcloud sql instances patch $INSTANCE --project $PROJECT --availability-type regional
the process fails with this message:
The following message will be used for the patch API method.
{"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
ERROR: (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
It also fails using the web interface.
gcloud version: Google Cloud SDK 280.0.0
This is the output of the log (not much help as far as I can see):
2020-02-14 11:01:34,476 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances']
2020-02-14 11:01:34,510 DEBUG root Loaded Command Group: [u'gcloud', u'sql', u'instances', u'patch']
2020-02-14 11:01:34,517 DEBUG root Running [gcloud.sql.instances.patch] with arguments: [--availability-type: "regional", --project: "$PROJECT", INSTANCE: "$INSTANCE"]
2020-02-14 11:01:35,388 INFO ___FILE_ONLY___ The following message will be used for the patch API method.
2020-02-14 11:01:35,398 INFO ___FILE_ONLY___ {"project": "$PROJECT", "name": "$INSTANCE", "settings": {"availabilityType": "REGIONAL", "databaseFlags": [{"name": "sql_mode", "value": "TRADITIONAL"}, {"name": "default_time_zone", "value": "+01:00"}]}}
2020-02-14 11:01:35,865 DEBUG root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
Traceback (most recent call last):
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 981, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line 807, in Run
resources = command_instance.Run(args)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 306, in Run
return RunBasePatchCommand(args, self.ReleaseTrack())
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\sql\instances\patch.py", line 278, in RunBasePatchCommand
instance=instance_ref.instance))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\third_party\apis\sql\v1beta4\sql_v1beta4_client.py", line 697, in Patch
config, request, global_params=global_params)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 731, in _RunMethod
return self.ProcessHttpResponse(method_config, http_response, request)
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 737, in ProcessHttpResponse
self.__ProcessHttpResponse(method_config, http_response, request))
File "C:\Users\udAL\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\..\lib\third_party\apitools\base\py\base_api.py", line 604, in __ProcessHttpResponse
http_response, method_config=method_config, request=request)
HttpBadRequestError: HttpError accessing <https://sqladmin.googleapis.com/sql/v1beta4/projects/$PROJECT/instances/$INSTANCE?alt=json>: response: <{'status': '400', 'content-length': '269', 'x-xss-protection': '0', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 14 Feb 2020 10:01:35 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 400,
"message": "The incoming request contained invalid data.",
"errors": [
{
"message": "The incoming request contained invalid data.",
"domain": "global",
"reason": "invalidRequest"
}
]
}
}
>
2020-02-14 11:01:35,868 ERROR root (gcloud.sql.instances.patch) HTTPError 400: The incoming request contained invalid data.
2020-02-14 11:01:35,898 DEBUG root Metrics reporting process started...
Edit:
When using the gcloud CLI command:
gcloud sql instances patch with three input parameters.
Both $PROJECT and $INSTANCE do exist, since gcloud sql databases list --instance $INSTANCE --project $PROJECT works fine.
--availability-type=regional is documented, so it should work.
I'm not constructing the request manually; I'm using the gcloud CLI.
When using the console.cloud.google.com web interface:
Main menu -> SQL -> select instance -> Enable High Availability.
It's a button, no parameters added by myself.
Both return the same error "The incoming request contained invalid data."
I can't see what I might be doing wrong.
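For reference, the databaseFlags in the patch payload above are not flags I passed on the command line; gcloud appears to copy them from the instance's existing settings. They can be inspected, and the failing request replayed with full HTTP logging, roughly like this (a minimal sketch, assuming an authenticated gcloud):
# Show the instance's current settings (availability type, database flags, ...)
gcloud sql instances describe $INSTANCE --project $PROJECT --format=yaml
# Re-run the patch with the raw HTTP request/response logged
gcloud sql instances patch $INSTANCE --project $PROJECT --availability-type regional --log-http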
Please double-check the data you are sending in the request.
I used the Method: instances.patch directly and it worked as expected for me, with the following path parameters and request body:
project: your-project
instance: your-instance
Request body:
"settings": {
"availabilityType": "REGIONAL",
"databaseFlags": [
{
"name": "sql_mode",
"value": "TRADITIONAL"
},
{
"name": "default_time_zone",
"value": "+01:00"
}
]
}
}
Curl command:
curl --request PATCH \
'https://sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance?key=[YOUR_API_KEY]' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{"settings":{"availabilityType":"REGIONAL","databaseFlags":[{"name":"sql_mode","value":"TRADITIONAL"},{"name":"default_time_zone","value":"+01:00"}]}}' \
--compressed
Response 200:
{
"kind": "sql#operation",
"targetLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance",
"status": "PENDING",
"user": "#cloud.com",
"insertTime": "2020-02-14T12:35:37.615Z",
"operationType": "UPDATE",
"name": "3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
"targetId": "your-instance",
"selfLink": "https://content-sqladmin.googleapis.com/sql/v1beta4/projects/wave25-vladoi/operations/3f55c1be-97b5-4d37-8d1f-15cb61b4c6cc",
"targetProject": "your-project"
}
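For anyone replaying the curl call above from a shell, the [YOUR_ACCESS_TOKEN] placeholder can be filled from an authenticated gcloud installation (a small sketch, not part of the original test):
# Print an OAuth 2 access token for the active gcloud account
ACCESS_TOKEN="$(gcloud auth print-access-token)"
# ...then pass it in the header: --header "Authorization: Bearer ${ACCESS_TOKEN}"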
Related
I want to send a PromQL query to amazon-managed-prometheus via awscurl, but I am not able to filter the result based on namespace.
I am able to use the same filter against a local Prometheus via prometheus_api_client.PrometheusConnect, but cannot do the same with AWS (because of the auth).
Is there any way?
This query without a label filter works:
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222/api/v1/query?query=sum(storage_level_sst_num) by (namespace, instance, level_index)"
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"instance": "10.0.3.68:1250",
"level_index": "0_MVGroup",
"namespace": "benchmark"
},
"value": [
1665730049.128,
"8"
]
}
]
}
}
With a namespace label filter, the same call fails:
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace="benchmark"}) by (instance, level_index)"
{"message":null}
Traceback (most recent call last):
File "/opt/homebrew/bin//awscurl", line 33, in <module>
sys.exit(load_entry_point('awscurl==0.26', 'console_scripts', 'awscurl')())
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 521, in main
inner_main(sys.argv[1:])
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/awscurl/awscurl.py", line 515, in inner_main
response.raise_for_status()
File "/opt/homebrew/Cellar/awscurl/0.26_1/libexec/lib/python3.10/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num%7Bnamespace=benchmark%7D)%20by%20(instance,%20level_index)
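Note that in the failing call the inner double quotes around benchmark are consumed by the shell (the encoded URL in the traceback shows namespace=benchmark with no quotes), so the service receives an invalid label matcher. A sketch of the same request with the inner quotes escaped (an assumption about the cause, not a confirmed fix):
# Escape the inner quotes so the PromQL matcher keeps namespace="benchmark"
awscurl -X POST --region ap-southeast-1 --access_key $KEY1 --secret_key $KEY --service aps \
  "https://aps-workspaces.ap-southeast-1.amazonaws.com/workspaces/ws-2222-222-222-222-222-2222/api/v1/query?query=sum(storage_level_sst_num{namespace=\"benchmark\"}) by (instance, level_index)"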
I have created a query that shows the expected behavior when executed manually. When I try to create a scheduled query from it, I always get the error "BigQuery error in query operation: Request contains an invalid argument." It is exactly the same query, except that I want to schedule it.
Both the web UI and the CLI display the same error, "BigQuery error in query operation: Request contains an invalid argument." Even --apilog=stdout returns nothing meaningful to me.
We use a similar scheduled query in another table - the only difference is that the target and origin location in the new query is --location=europe-west3 instead of us.
Command:
bq query \
--append \
--display_name=XXX \
--schedule='every 24 hours' \
--destination_table=XXX \
--use_legacy_sql=false \
'[...]'
Stdout:
INFO:googleapiclient.model:--request-start--
INFO:googleapiclient.model:-headers-start-
INFO:googleapiclient.model:content-type: application/json
INFO:googleapiclient.model:accept-encoding: gzip, deflate
INFO:googleapiclient.model:accept: application/json
INFO:googleapiclient.model:user-agent: google-api-python-client/1.7.10 (gzip)
INFO:googleapiclient.model:-headers-end-
INFO:googleapiclient.model:-path-parameters-start-
INFO:googleapiclient.model:parent: projects/XXX/locations/-
INFO:googleapiclient.model:-path-parameters-end-
INFO:googleapiclient.model:body: {"destinationDatasetId": "III", "displayName": "scheduledQueryName", "schedule": "every 24 hours", "scheduleOptions": {"disableAutoScheduling": false}, "dataSourceId": "scheduled_query", "params": {"query": "[.........]", "write_disposition": "", "destination_table_name_template": "[.........]", "partitioning_field": ""}}
INFO:googleapiclient.model:query: ?authorizationCode=&alt=json
INFO:googleapiclient.model:--request-end--
INFO:googleapiclient.discovery:URL being requested: POST https://bigquerydatatransfer.googleapis.com/v1/projects/XXX/locations/-/transferConfigs?authorizationCode=&alt=json
INFO:googleapiclient.model:--response-start--
INFO:googleapiclient.model:status: 400
INFO:googleapiclient.model:content-length: 285
INFO:googleapiclient.model:x-xss-protection: 0
INFO:googleapiclient.model:x-content-type-options: nosniff
INFO:googleapiclient.model:transfer-encoding: chunked
INFO:googleapiclient.model:vary: Origin, X-Origin, Referer
INFO:googleapiclient.model:server: ESF
INFO:googleapiclient.model:-content-encoding: gzip
INFO:googleapiclient.model:cache-control: private
INFO:googleapiclient.model:date: Tue, 19 Nov 2019 14:06:45 GMT
INFO:googleapiclient.model:x-frame-options: SAMEORIGIN
INFO:googleapiclient.model:alt-svc: quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000
INFO:googleapiclient.model:content-type: application/json; charset=UTF-8
INFO:googleapiclient.model:{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"errors": [
{
"message": "Request contains an invalid argument.",
"domain": "global",
"reason": "badRequest"
}
],
"status": "INVALID_ARGUMENT"
}
}
INFO:googleapiclient.model:--response-end--
BigQuery error in query operation: Request contains an invalid argument.
Any clue what can cause "BigQuery error in query operation: Request contains an invalid argument."?
BigQuery Data Transfer Service does not yet support location europe-west3.
Please select a dataset in a supported location.
You have to change the location to EU
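A sketch of the same command after recreating the destination dataset in the EU multi-region (scheduled queries run in the location of their destination dataset; --location is a global bq flag and goes before the command):
bq --location=EU query \
--append \
--display_name=XXX \
--schedule='every 24 hours' \
--destination_table=XXX \
--use_legacy_sql=false \
'[...]'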
The easy-and-famous datalab create instance-name command is no longer working. We did not make any change to the project, APIs, keys, or any other Google options.
The same command was OK yesterday, and now:
user-used-yeserday#pruebaalexborrar:~$ datalab create alexborrarpurbea
ERROR: gcloud crashed (BadStatusCodeError): HttpError accessing
<https://sourcerepo.googleapis.com/v1/projects/pruebaalexborrar/repos?alt=json>:
response: <{'status': '500', 'content-length': '109', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Wed, 19 Apr 2017 09:08:43 GMT', 'x-frame-options': 'SAMEORIGIN', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 500,
"message": "Internal error encountered.",
"status": "INTERNAL"
}
}
>
When I use the same URL in my browser, I get a different error:
{
"error": {
"code": 401,
"message": "Request is missing required authentication credential. Expected
OAuth 2 access token, login cookie or other valid authentication credential.
See https://developers.google.com/identity/sign-in/web/devconsole-project.",
"status": "UNAUTHENTICATED"
}
}
I guess the 401 error code is not related to the 500 above from the `datalab create` command...
I know Google is now rolling out a new cloud release...
Does anyone know what's happening?
There is a reported issue (#37242989) in the issue tracker regarding this, so I suggest adding more details there and starring the issue to get further updates from the team working on it.
Following the quickstart for GCP Dataflow, I run into the following error when executing the example wordcount script, using this command:
declare -r PROJECT="beam-test"
declare -r BUCKET="gs://my-beam-test-bucket"
echo
set -v -e
python -m apache_beam.examples.wordcount \
--project $PROJECT \
--job_name $PROJECT-wordcount \
--runner DataflowRunner \
--staging_location $BUCKET/staging \
--temp_location $BUCKET/temp \
--output $BUCKET/output
which results in this error:
http_response.request_url, method_config, request)
apitools.base.py.exceptions.HttpError: HttpError accessing <https://dataflow.googleapis.com/v1b3/projects/beam-test/locations/us-central1/jobs?alt=json>: response: <{'status': '403', 'content-length': '284', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'vary': 'Origin, X-Origin, Referer', 'server': 'ESF', '-content-encoding': 'gzip', 'cache-control': 'private', 'date': 'Fri, 31 Mar 2017 15:52:54 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="37,36,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
"error": {
"code": 403,
"message": "(f010d95b3e221bbf): Could not create workflow; user does not have write access to project: beam-test Causes: (f010d95b3e221432): Permission 'dataflow.jobs.create' denied on project: 'beam-test'",
"status": "PERMISSION_DENIED"
I have already enabled the Dataflow API for the project, and I have authorized the gcloud CLI with the owner account of the project (which I assume has full access).
How & where do I enable write permissions?
Change $PROJECT=project-name to $PROJECT=project-id
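If in doubt, the project ID (as opposed to the display name) can be looked up with gcloud, for example:
# The PROJECT_ID column is what $PROJECT should be set to
gcloud projects list --format="table(projectId, name)"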
Have you tried running gcloud auth login to make sure you have a valid credential?
If yes, your default cloud project might be different than the one you're running Dataflow with. To change the default project, you can run gcloud init.
Let me know if that doesn't solve it.
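The default project can also be switched directly instead of going through the full init flow (a small sketch; PROJECT_ID is a placeholder):
# Point gcloud at the intended project ID
gcloud config set project PROJECT_ID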
I'm running a simple Dataflow pipeline with the Python SDK for counting keywords. The job runs fine for pre-processing the input data, but it fails in the grouping/output steps with the following error.
I guess the log says the worker is having an issue accessing the temp folder, but the storage bucket in our project exists with proper permissions. What could be the cause of this?
"/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcsio.py", line
606, in write raise self.upload_thread.last_error # pylint:
disable=raising-bad-type HttpError: HttpError accessing
<https://www.googleapis.com/resumable/upload/storage/v1/b/[PROJECT-NAME-REDACTED]-temp-2016-08-07_04-42-52/o?uploadType=resumable&alt=json&name=0015bf8d-fa87-4c9a-82d6-8ffcd742d770>:
response: <{'status': '404', 'alternate-protocol': '443:quic',
'content-length': '165', 'vary': 'Origin, X-Origin', 'server':
'UploadServer', 'x-guploader-uploadid':
'AEnB2UoYRPUwhz-OXlJ437k0J8Uxd1lJvTsFbfVJF_YMP2GQEvmdDpo7e-3DVhuqNd9b1A_RFPbfIcK6hCsFcar-hdI94rqJZUvATcDmGRRIvHecAt5CTrg',
'date': 'Sun, 07 Aug 2016 04:43:23 GMT', 'alt-svc': 'quic=":443";
ma=2592000; v="36,35,34,33,32,31,30"', 'content-type':
'application/json; charset=UTF-8'}>, content <{ "error": { "errors": [
{ "domain": "global", "reason": "notFound", "message": "Not Found" }
], "code": 404, "message": "Not Found" } } >
This is https://issues.apache.org/jira/browse/BEAM-539: TextFileSink does not accept root buckets as outputs. As a workaround, please use a subdirectory path (e.g. gs://foo/bar) as the output location.
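For example, reusing the wordcount command and bucket name from the previous question (a sketch illustrating the workaround, not the asker's actual pipeline):
# Writing to the bucket root (--output gs://my-beam-test-bucket) hits BEAM-539;
# a subdirectory path works:
python -m apache_beam.examples.wordcount \
  --runner DataflowRunner \
  --project $PROJECT \
  --temp_location gs://my-beam-test-bucket/temp \
  --output gs://my-beam-test-bucket/counts/output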