Failed to convert server response to JSON | gcloud.services.operations.describe - google-cloud-platform

I'm new to Google Cloud services. I was going through some tutorials and had to run the following command to describe an operation.
$ gcloud services operations describe operations/acf.xxxx
However, this command failed with the following error:
ERROR: (gcloud.services.operations.describe) INTERNAL: Failed to convert server response to JSON
I'm performing these operations in Windows PowerShell using bash commands. Is there any way to resolve this?

Possibly a bug.
I get the same error on Linux with gcloud v.291.0.0.
You may wish to report this issue on Google's Issue Tracker.
A useful feature of gcloud is that you can append --log-http to any command to see the underlying REST API calls, which is often (though not really in this case) more illuminating of the error.
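For example, re-running the failing command from the question with HTTP logging enabled:
gcloud services operations describe operations/acf.xxxx --log-http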
This yields (for me):
uri: https://serviceconsumermanagement.googleapis.com/v1beta1/operations/${OPERATION_ID}?alt=json
method: GET
...
{
  "error": {
    "code": 500,
    "message": "Failed to convert server response to JSON",
    "status": "INTERNAL"
  }
}
Another excellent debugging tool is the APIs Explorer, which supports all of Google's REST endpoints. It is accessible from the API documentation:
https://cloud.google.com/service-infrastructure/docs/service-consumer-management/reference/rest/v1beta1/operations/get
If you complete the APIs Explorer form on the right-hand side, I suspect you'll receive the same error.
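You can also call the REST endpoint directly from the terminal; a minimal sketch using curl and a gcloud-issued access token (${OPERATION_ID} is a placeholder, as in the log above):
curl \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://serviceconsumermanagement.googleapis.com/v1beta1/operations/${OPERATION_ID}"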
These approaches appear to confirm that the issue is on Google's side.

Related

"Host header is specified and is not an IP address or localhost" message when using chromedp headless-shell

I'm trying to deploy chromedp/headless-shell to Cloud Run.
Here is my Dockerfile:
FROM chromedp/headless-shell
ENTRYPOINT [ "/headless-shell/headless-shell", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222", "--disable-gpu", "--headless", "--no-sandbox" ]
The command I used to deploy to Cloud Run is
gcloud run deploy chromedp-headless-shell --source . --port 9222
Problem
When I go to the path /json/list, I expect to see something like this:
[{
  "description": "",
  "devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E",
  "id": "B06F36A73E5F33A515E87C6AE4E2284E",
  "title": "about:blank",
  "type": "page",
  "url": "about:blank",
  "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E"
}]
but instead, I get this error:
Host header is specified and is not an IP address or localhost.
Is there something wrong with my configuration or is Cloud Run not the ideal choice for deploying this?
This specific issue is not unique to Cloud Run. It originates from a change in the Chrome DevTools Protocol which generates this error when the endpoint is accessed remotely, likely as a security measure against some types of attacks. You can see the related Chromium pull request here.
I deployed a chromedp/headless-shell container to Cloud Run using your configuration and received the same error. There is a useful comment in a GitHub issue showing a workaround for this problem: passing a Host: localhost header. While this works when I test it locally, it does not work on Cloud Run (it returns a 404 error). The 404 could be due to how Cloud Run itself uses the Host header to route requests to the correct service.
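For reference, the workaround looks like this (your-host is a placeholder for wherever the container is reachable; against a Cloud Run URL this is the request that returns the 404):
curl -H "Host: localhost" http://your-host:9222/json/list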
Unfortunately this answer is not a solution, but it sheds some light on what you are seeing and why. I would consider a different GCP service, such as GCE, which gives you plain virtual machines and is less managed.
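As a sketch of that route (the instance name and zone are placeholders, not from the question), the same container could run on a container-optimized GCE VM, passing the flags from your ENTRYPOINT as container arguments:
gcloud compute instances create-with-container headless-shell-vm \
  --zone=us-central1-a \
  --container-image=chromedp/headless-shell \
  --container-arg="--remote-debugging-address=0.0.0.0" \
  --container-arg="--remote-debugging-port=9222"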

Why can't Superset connect to Athena using PyAthena and the rest scheme, throwing HTTP 422 "unexpected error"?

I'm installing Superset with docker-compose. The app is up and running. When adding a new database using the PyAthena connector, the error "Unexpected error occurred, please check your logs for details" appears, with no details in the logs.
First, if you are using docker-compose, check whether you have added the driver to the build environment:
echo "PyAthena>1.2.0" >> ./docker/requirements-local.txt
If you haven't, you will get a "Driver not found" error.
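After adding the driver, rebuild the images so it is actually installed (a sketch, assuming the standard Superset docker-compose setup):
docker-compose down
docker-compose up --build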
Second, check your URI scheme. It must have the following form:
awsathena+rest://AKIAXXXX:XXXXXX@athena.{region}.amazonaws.com/{database_name}?s3_staging_dir=s3://{bucket_name_for_results}
If you are missing the query-string part, you may get a mysterious error without a detailed reason.
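For example, with hypothetical values filled in:
awsathena+rest://AKIAXXXX:XXXXXX@athena.us-east-1.amazonaws.com/default?s3_staging_dir=s3://my-query-results/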
Also note that PyAthena does not check your AK/SK against the staging bucket.

GCP Data Fusion "no discoverable found" error

I'm trying to use GCP Data Fusion Basic Edition with the Private IP option, but when I try to create a pipeline, every action gives me this error:
No discoverable found for request POST /v3/namespaces/system/apps/pipeline/services/studio/methods/v1/contexts/default/validations/stage HTTP/1.1
Any suggestion on how to solve this issue?
Thanks
This error indicates that the Pipeline Studio service is down. Check the status of Pipeline Studio in System Admin and look at the logs as described here.
You can restart the Pipeline Studio service by going to System Admin > Configuration > Make HTTP Call.
Change the method to POST and set the path to namespaces/system/apps/pipeline/services/studio/start
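The same restart can also be issued from a terminal against the CDAP REST API; a sketch, where ${CDAP_ENDPOINT} is a placeholder for your instance's API endpoint:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "${CDAP_ENDPOINT}/v3/namespaces/system/apps/pipeline/services/studio/start"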
You can validate your pipeline once the Pipeline Studio status becomes green.

Amazon Connect - cannot debug error in Get Customer Input Stage

I am new to Amazon Connect and Lex and have just started creating simple projects. I have already created an entire contact flow which uses Lex and Lambda for routing. The problem is that the "Get Customer Input" stage always seems to go to the error output, and I cannot figure out why. I tried to find logs for each stage in the contact flow, but could not find any.
Can anyone help me solve this issue? I need to see logs to find out the cause of the error.
EDIT: I got the contact flow logs from CloudWatch (see below), but I can't find any significant error in them.
{
  "Results": "Error",
  "ContactId": "<contact-id>",
  "ContactFlowId": "<the contact flow id>",
  "ContactFlowModuleType": "GetUserInput",
  "Timestamp": "2019-07-08T08:27:01.185Z"
}
You might be getting this because Lex itself is returning an error, which is why the flow goes down the error branch.
You can check the logs for Connect and Lex in Amazon CloudWatch.
You can also provide details from the logs or a screenshot of the exact error you are getting, so that I can help.
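For example, once contact flow logging is enabled, you can filter the log group from the terminal (the log group name is an assumption; Connect's default is /aws/connect/<instance-alias>):
aws logs filter-log-events \
  --log-group-name "/aws/connect/your-instance-alias" \
  --filter-pattern "Error"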
This might be due to a language settings mismatch.
If you're using Lex V2, make sure you set the proper language attribute as well. The easiest way is to use the Set Voice block in your contact flow; at the very bottom of the block you can enable "set language attribute".
Original answer: https://repost.aws/questions/QUn9bLLnclQxmD_DMBgfB9_Q/amazon-connect-error-using-lex-as-customer-input

Authentication with Cognito - where to find logs

We have two React Native apps using AWS Cognito for authentication, via the react-native-aws-cognito-js library. The apps were working fine until the last two days, when they began experiencing intermittent "Internal Server Error" responses.
How can I find more information about this error? Is there any tool that can help us pinpoint the cause?
Update
From CloudTrail, each API call has a "CreateNetworkInterface" event. Many of these calls have the error code "Client.NetworkInterfaceLimitExceeded". What is the cause of this, and what is the solution?
According to this AWS doc (in Chinese), CloudWatch will not write to the log when the error is due to insufficient IPs/ENIs. That explains the increase in errors with no corresponding logs in CloudWatch.
Update 2
We found a scheduled Lambda job which may have exhausted the available IP addresses. We stopped the batch job, but we still can't have many users log in, due to the "Client.NetworkInterfaceLimitExceeded" error. I noticed there are many "CreateNetworkInterface" events but few "DeleteNetworkInterface" events. How can I clean up / reset all network interfaces in the VPC?
Short answer: CloudTrail.
Long answer with a suggestion
Assuming your application code is fine, the most likely cause of your 500 errors is Cognito's limits (e.g., the number of calls per user): https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html.
AWS suggests using CloudTrail to log the API calls.
However, to prove that the limits are the cause, I would suggest adding some logging around the API calls yourself; then, in development, hit your app/API with a high number of calls, and most likely you will see the 500 error once the limits are reached.
You could do the following in the terminal:
for i in `seq 1 1000`; do curl --cookie SecureCookie=TokenValueFromAWS http://localhost:desirablePort/SecuredPath; done