How to set up a query in the Google Cloud Monitoring plugin in Grafana? - google-cloud-platform

I'm getting an unexpected error when I start to write a query:
An unexpected error happened
Can anyone help me resolve this issue?
I'm expecting to see the query editor so I can set up the dashboard.

Related

In an Amazon AppFlow sync between S3 and Salesforce, how can I resolve a vague error that processing failed "with message: null"?

Starting last week, an Amazon AppFlow flow I use to sync CSVs from S3 to Salesforce Accounts has been failing with this vague error:
Found a part that failed processing 2021-03-15T16:00:05.581Z with
message: null
There are no further details in the S3 folder for transfer errors, or in our CloudWatch events. (We have both enabled.) In addition, it looks like someone is experiencing the same problem in this AWS Forum thread. I've added some additional details in there, too.
I've tried reducing the number of rows in the sync, editing the flow, and even recreating it, but the error still persists.
This error was happening because the flow was failing a validation whose action was set to "Terminate Flow" (rather than to ignore the offending row).
AWS support acknowledged that the error message is unclear and is working on a fix.
For now, the workaround is to fix the underlying data issue that is causing the validation rule to fail.
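As a sketch of that workaround (purely illustrative; the file name and the index of the required column are assumptions, not details from the flow above), one option is to pre-filter the CSV so that rows which would trip the validation never reach AppFlow:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class CsvPreFilter {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("accounts.csv"));
        String header = lines.get(0);
        int requiredCol = 2; // hypothetical index of the column the validation checks

        // Keep only rows whose required column is non-empty, so the
        // "Terminate Flow" validation has nothing left to fail on.
        // (Naive split on ','; fine for simple CSVs without quoted commas.)
        List<String> cleaned = lines.stream()
                .skip(1)
                .filter(row -> {
                    String[] cells = row.split(",", -1);
                    return cells.length > requiredCol && !cells[requiredCol].isBlank();
                })
                .collect(Collectors.toList());

        cleaned.add(0, header);
        Files.write(Path.of("accounts-clean.csv"), cleaned);
    }
}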

How to get error logs to appear in Stackdriver Error Reporting?

I am having difficulties integrating with Stackdriver Error Reporting.
When using the Stackdriver log viewer I can see that it has correctly identified the entry as an error event, based on the orange '!!' on the log line.
The logs are coming from a Java application in a pod on Kubernetes. I am using SLF4J and logback to control my logging. I realise this is not the setup suggested in the docs, which use fluentd; however, I would like to avoid changing my logging across all applications.
Following the troubleshooting guide, I am able to submit a log line that is picked up, and also to report an error directly. This makes me think the issue must be permissions related. I have tried adding the "Error Reporting Admin" role to the Compute Engine default service account and to the Kubernetes Engine Service Agent, but this has not worked.
Am I missing something?
The !! in the logs viewer means that the LogEntry.severity field has a value of ERROR (which is provided by the client that wrote the entry). Entries that land in Error Reporting need to meet a few other criteria: https://cloud.google.com/error-reporting/docs/formatting-error-messages
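For the SLF4J/logback setup described in the question, the main criterion is that the ERROR-severity entry contains an exception stack trace. A minimal sketch (the class and method names here are made up, not from the question): pass the exception itself as the last argument so logback appends the full stack trace to the message.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger logger = LoggerFactory.getLogger(PaymentService.class);

    public void charge(String accountId) {
        try {
            // ... business logic that may throw (hypothetical)
            throw new IllegalStateException("account not found: " + accountId);
        } catch (Exception e) {
            // Passing the Throwable as the final argument makes logback append the
            // full stack trace to the rendered message; an ERROR-severity entry that
            // carries a stack trace is what Error Reporting looks for.
            logger.error("Failed to charge account {}", accountId, e);
        }
    }
}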
You might also be interested in the details on how errors are grouped together: https://cloud.google.com/error-reporting/docs/grouping-errors
The "Error Reporting Admin" role would allow someone (or a service account) to perform actions like muting an error group. There are no permissions requirements to get data from logging into Error Reporting.

Cloud composer unstable UI

The Airflow UI randomly fails to load and a Google 503 error message is shown instead. It's getting really hard and annoying to navigate the Airflow UI. Is this a known issue? After searching the internet for a long time, I did not find any leads.
Please let me know if I'm doing something wrong.
I have attached the error message that I'm getting randomly. Hope this gets fixed.
Would you happen to be trying to access the Airflow UI from Asia? This is a known issue for APAC users, and a remedy is coming soon. In the meantime, a workaround is to deploy a self-managed webserver.

Enabling GCP Cloud Machine Learning Engine error

When I tried to access Cloud ML Engine in GCP, I was asked to enable its API first. When I did, I got the following error:
Update failed with following error(s) for project settings: -- Backend Provisioning Error: {ml.googleapis.com INTERNAL: API enabling failed in operation operations/ml_enable_api/xxxxxxxxx/yyyyyyyyyyyyyyyyy for project xxxxxxxxx};
I don't know how to address this issue. Any insight will be appreciated!
Please try again. This most likely indicates a transient error enabling one of the GCP APIs. If the problem persists, please email us at cloudml-feedback@google.com.

WSO2 API Manager Post Upgrade Error

We recently upgraded our single-instance API Manager from 1.9.0 to 1.10.0. The upgrade seemed to be mostly successful, but any time I try to load one of the services in the Publisher, it freezes up and the log reports:
Error while retrieving the lifecycle actions for lifecycle: APILifeCycle in lifecycle state: null
at org.wso2.carbon.governance.api.common.dataobjects.GovernanceArtifactImpl.getAllLifecycleActions(GovernanceArtifactImpl.java:783)
at org.wso2.carbon.apimgt.impl.APIProviderImpl.getAPILifeCycleData(APIProviderImpl.java:3306)
... 101 more
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: Resource at '/_system/governance/apimgt/applicationdata/provider/<User>/<API>/v1/api' not associated with aspect 'APILifeCycle'
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getResourceAspect(EmbeddedRegistry.java:2592)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getAspectActions(EmbeddedRegistry.java:2627)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.getAspectActions(CacheBackedRegistry.java:474)
at org.wso2.carbon.registry.core.session.UserRegistry.getAspectActionsInternal
And also (I replaced sensitive info with <>):
org.wso2.carbon.registry.core.exceptions.RegistryException: Resource at '/_system/governance/apimgt/applicationdata/provider/<USER>/<SERVICE>/v1/api' not associated with aspect 'APILifeCycle'
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getResourceAspect(EmbeddedRegistry.java:2592)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getAspectActions(EmbeddedRegistry.java:2627)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.getAspectActions(CacheBackedRegistry.java:474)
I have spent a ton of time trying to locate the code where this error occurs, with no luck (I'm not a Java developer anyway). Does anyone have any ideas on what is causing this error? All of the services that are registered were registered pre-upgrade.
UPDATE:
As suggested below, this is because the migration failed. I dug in a little deeper and discovered that the migration is failing due to bad SQL. Within the migration client, specifically in the file MigrateFrom19To110.java, line 189 executes an ad hoc SQL statement:
"UPDATE IDN_OAUTH2_ACCESS_TOKEN SET AUTHZ_USER = ? WHERE AUTHZ_USER = ?"
This throws this error:
Must declare the scalar variable "#P0WHERE"
It seems like there is something wrong with the code that is building this statement, because the "#P0WHERE" token appears to be coming from the JDBC driver's parameter naming. Can anyone shed more light on this?
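To illustrate what I suspect is happening (a hypothetical reconstruction, not the actual migration client code): the JDBC driver names the first '?' placeholder P0 when it prepares the statement (that is the "P0" in the error), so if the UPDATE fragment and the WHERE fragment were concatenated without a separating space, the server would see P0WHERE as a single token and treat it as an undeclared variable.
public class SqlConcatExample {
    public static void main(String[] args) {
        // Hypothetical sketch of the kind of string building that could produce this error.
        String update = "UPDATE IDN_OAUTH2_ACCESS_TOKEN SET AUTHZ_USER = ?";
        String where = "WHERE AUTHZ_USER = ?";

        String broken = update + where;       // "...AUTHZ_USER = ?WHERE AUTHZ_USER = ?" -- parameter name runs into WHERE
        String fixed = update + " " + where;  // "...AUTHZ_USER = ? WHERE AUTHZ_USER = ?"

        System.out.println(broken);
        System.out.println(fixed);
    }
}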
Did you run the migration client during the upgrade? It seems like the migration hasn't gone correctly. The steps are given in https://docs.wso2.com/display/AM1100/Upgrading+from+the+Previous+Release
The reason I'm saying this is that, as part of the migration, we attach the 'APILifeCycle' lifecycle to existing APIs. Since the error says the API is not associated with that lifecycle, my guess is that the problem is with the migration.