Error updating report - Gateway must be updated to the latest version? - powerbi

I am receiving an error when updating my report. The report has two sources, one in SQL Server and one in MariaDB. I have no problem assigning these two sources; however, when I try to set up an automatic refresh, or refresh the report manually, it gives me the following error:
(screenshot of the error, which says the gateway must be updated to the latest version)
I have tried checking and cleaning the file's data sources, but it doesn't help.


Error cloning database unable to update the following flags: cloudsql.enable_password_validation

I am attempting to clone a database. I was able to clone it previously in the console, but now I want to create a small script to automate this, and it fails with the following error message:
(gcloud.sql.instances.clone) [ERROR_RDBMS] unable to update the following flags: cloudsql.enable_password_validation
If I attempt to clone it in the console, I get the same error shown above.
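For context, the clone step in my script is essentially just the standard clone command (instance names here are placeholders):
gcloud sql instances clone my-instance my-instance-clone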
I looked up the documentation and enable_password_validation does not seem to be in the list of supported flags, which would explain why it can't update it.
If I run gcloud sql instances describe my-instance, I don't see the flag in question.
But running on the source instance:
SELECT * FROM pg_settings
yields this row in particular:
name             | cloudsql.enable_password_validation
setting          | off
unit             | NULL
category         | Customized Options
short_desc       | Sets whether to enable Cloud SQL password validation.
extra_desc       | NULL
context          | superuser
vartype          | bool
source           | configuration file
min_val          | NULL
max_val          | NULL
enumvals         | NULL
boot_val         | on
reset_val        | off
sourcefile       | /pgsql/data/postgresql.auto.conf
sourceline       | 3
pending_restart  | False
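For reference, the same row can be pulled on its own with a narrower query:
SELECT name, setting, boot_val, reset_val FROM pg_settings WHERE name = 'cloudsql.enable_password_validation';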
Any advice on how to solve this?
There is currently an ongoing issue with password validation in Cloud SQL Postgres instances. The issue involves the exact flag that is giving you problems, cloudsql.enable_password_validation:
Diagnosis: Affected postgres instances from a recent release have the following flag set and are unable to remove or disable this flag: cloudsql.enable_password_validation=on. This flag does not appear in Cloud Console, and attempting to disable flag via gcloud returns error where the flag is not recognized or supported. Password validation occurs on every new client connection but is limited to 50 QPS, and thus higher rates will return errors.
When did this issue start occurring, and have you attempted to clone the database since then? I ask because the incident has received several updates. If you continue experiencing issues, you could open a support case with GCP, as the status page recommends.
EDIT (2/24/2022)
I wanted to update this answer. The issue seems to be resolved as shown in the status page of Google Cloud:
The issue with Cloud SQL has been resolved for all affected instances as of Tuesday, 2022-02-22 14:30 US/Pacific. We thank you for your patience while we worked on resolving the issue.
If you still see this error, please update the question to confirm that it was not resolved as part of the outage resolution.

Errors when using DialogFlow "restore agent" API

We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid
agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
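For reference, our call is essentially the plain restore flow. A rough sketch of what we do (the project id and zip path are placeholders, and keyword handling may differ slightly between client library versions):
import dialogflow_v2 as dialogflow

client = dialogflow.AgentsClient()
parent = "projects/my-gcp-project"  # placeholder project id
with open("agent.zip", "rb") as f:  # zip built from our own intents/entities
    agent_content = f.read()
operation = client.restore_agent(parent, agent_content=agent_content)
operation.result()  # block until the long-running restore completes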
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even including the agent.json as above, the restore will still fail but return the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation --
1) The agent.json file is now required in the zip file, where before it was optional
2) We found that some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state ("Failed to get Training Phrases" error, etc. as mentioned above). A quick way to check for this is sketched below.
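A minimal sketch of that check, assuming the agent export has been unzipped into a local agent_export/ directory (a hypothetical path):
import json
import os
import uuid

# walk the exported intents and flag any user-says entry whose "id" is not a valid UUID
for root, _, files in os.walk("agent_export/intents"):
    for name in files:
        if "_usersays_" not in name:
            continue
        path = os.path.join(root, name)
        with open(path) as f:
            phrases = json.load(f)
        for phrase in phrases:
            try:
                uuid.UUID(str(phrase.get("id", "")))
            except ValueError:
                print("invalid id in %s: %r" % (path, phrase.get("id")))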
An easy way to fix this is to export one of the existing agents and copy its agent.json and package.json into your current directory before uploading.
agent.json is now required by dialogflow.

How to fix `user must specify LSN` when using AWS DMS for Postgres RDS

I'm trying to migrate and synchronize a PostgreSQL database using AWS DMS and I'm getting the following error.
Last Error Task error notification received from subtask 0, thread 0
[reptask/replicationtask.c:2673] [1020101] When working with Configured Slotname, user must
specify LSN; Error executing source loop; Stream component failed at subtask 0, component
st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ ; Stream component 'st_0_D27UO7SI6SIKOSZ4V6RH4PPTZQ'
terminated [reptask/replicationtask.c:2680] [1020101] Stop Reason FATAL_ERROR Error Level FATAL
I already created a replication slot and configured its name in the source endpoint.
DMS Engine version: 3.1.4
Does anyone know anything that could help me?
Luan -
I experienced the same issue - I was trying to replicate data from Postgres to an S3 bucket. I would check two things: your version of Postgres and the DMS version being used.
I downgraded my RDS postgres version to 9.6 and my DMS version to 2.4.5 to get replication working.
You can find more details here -
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
I wanted to try the newer versions of DMS (3.1.4 and 3.3.0 [beta]) as they have Parquet support, but I got the same errors you mentioned above.
Hope this helps.
It appears AWS expects you to use the pglogical extension rather than test_decoding. You have to:
add pglogical to shared_preload_libraries in parameter options
reboot
CREATE EXTENSION pglogical;
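To confirm the library and extension are actually in place after the reboot, you can run, for example:
SHOW shared_preload_libraries;
SELECT extname FROM pg_extension WHERE extname = 'pglogical';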
On DMS 3.4.2 and Postgres 12.3, without the slotName= setting, DMS created the slot by itself. Also make sure you exclude the pglogical schema from the migration task, as it has unsupported data types.
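The exclusion can be added as a selection rule in the task's table mappings, alongside whatever include rules you already have; roughly like this (the rule id and name are arbitrary):
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "exclude-pglogical",
      "object-locator": { "schema-name": "pglogical", "table-name": "%" },
      "rule-action": "exclude"
    }
  ]
}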
P.S. When DMS hits resource limits it silently fails. After resolving the LSN errors, I continued to get failures of the type Last Error Task 'psql2es' was suspended due to 6 successive unexpected failures Stop Reason FATAL_ERROR Error Level FATAL without any errors in the logs. I resolved this issue using the Advanced task settings > Full load tuning settings and tuning the parameters downward.

How can I set up Sitecore Commerce Demo Site (Sitecore.Demo.Retail) to exclude configuration errors?

When I set up the Sitecore demo retail site (source - https://github.com/Sitecore/Sitecore.Demo.Retail), I encountered several problems related to Sitecore Commerce configuration and Sitecore Engine configuration. I will break these issues down:
I got the following error while running the install-commerce-server.ps1 script on step 5 (Commerce Server Configuration)
I got the error 'HTTP Error 502.5 - Process Failure' at the URL http://habitat.commerceengine.dev.local:5000/api/$metadata
On the 'retail.dev.local' site I got the error 'Could not find property 'shopName' on object of type: Sitecore.Commerce.Engine.Connect.CommerceEngineConfiguration' when I tried to go to any page with products
I encountered some errors in the Sitecore Commerce applications (Merchandising Manager, Pricing & Promotions) in the Sitecore Experience Platform
However, I have resolved these issues and I hope this info will be useful for setting up the Sitecore Demo Retail site (https://github.com/Sitecore/Sitecore.Demo.Retail).
I have repeated the installation instructions for Sitecore.Demo.Retail and fixed the related issues:
This issue was discussed in https://github.com/Sitecore/Sitecore.Demo.Retail/issues/81. You need to check the file 'Server2012_FeaturesRequired.txt' as stated in issue 81. Then you must check the file csconfig.xml (path for me - 'c:\Projects\Sitecore.Demo.Retail\install'). I had a bad (default) SQL connection to the MSSQL Server; correcting that connection string gave me a working configuration.
Alternatively, you can run the Commerce Server Configurator manually with 'CSConfig.exe /f' (path for me - 'c:\Program Files (x86)\Commerce Server 11\'). There you can load the XML configuration and set and test the SQL connection.
This issue appeared in my environment because I had wrong (default) SQL connections in the Commerce Engine project in the Sitecore.Demo.Retail solution. You must change all connections in the following files: Global.json, Habitat.CommerceAuthoring-1.0.0.json, Habitat.CommerceShops-1.0.0.json.
Don't be afraid to check the corresponding configs in the deployed solution as well.
This error appeared due to wrong (storefront) tags inside the 'commerceEngineConfiguration' tag. You need to remove these tags in the Sitecore.Demo.Retail.config file and can verify the result in showConfig.aspx.
You should check the connection strings in the file Z.Sitecore.Commerce.UX.Shared.config (path for me - c:\websites\habitat.dev.local\Website\App_Config\Include). By default mine pointed to 'localhost:5000/...'.

WSO2CEP 4.2.0 error: A mandatory attribute null does not exist

We are using WSO2CEP version 4.2.0. We are connecting to a MySQL database (version 5.6.34-1 community edition from Oracle) on the back-end with mysql-connector-java-5.1.40.jar. We have set up several connections in the master-datasources.xml, and receive "Connection is healthy" for all connections when testing them in Datasources. When we attempt to use an event publisher that accesses the referenced databases an error appears:
[2017-01-24 17:11:22,178] ERROR {org.wso2.carbon.event.publisher.admin.EventPublisherAdminService} - org.wso2.carbon.event.output.adapter.core.exception.OutputEventAdapterRuntimeException: A mandatory attribute null does not exist
org.wso2.carbon.event.publisher.core.exception.EventPublisherConfigurationException: org.wso2.carbon.event.output.adapter.core.exception.OutputEventAdapterRuntimeException: A mandatory attribute null does not exist
at org.wso2.carbon.event.publisher.core.EventPublisherDeployer.processDeployment(EventPublisherDeployer.java:227)
at org.wso2.carbon.event.publisher.core.EventPublisherDeployer.executeManualDeployment(EventPublisherDeployer.java:249)
.........several lines after this ...............
Our team is at a loss. We have tried things like giving blanket permissions (including DDL) to the database user, trying an old database that "used to work", and swapping out versions of the mysql-connector-java jar.
We found that we had a configuration problem: invalid XML in output-event-adapters.xml was causing the error. Once the bad XML was fixed, the error was gone.
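For anyone hitting something similar, a quick well-formedness check on the suspect file would have pointed straight at it (the path below assumes the usual <CEP_HOME>/repository/conf layout):
xmllint --noout repository/conf/output-event-adapters.xml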
WSO2, please consider addressing error verbosity in your products. The error that was being logged provided no indication that invalid XML might be the cause, and we wasted several hours troubleshooting the problem as a result. We have experienced similar error-verbosity-related issues in other WSO2 products. A simple "could not parse XML" with the file name would have literally saved us several hours this time.