We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
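For reference, here is roughly what our call looks like (a minimal sketch, assuming the dialogflow 0.8.0 Python client; the project ID and zip path are placeholders, not our real values):

# Minimal sketch of the restore call we make, assuming the dialogflow 0.8.0 client.
# "my-project" and "agent.zip" are placeholders.
import dialogflow_v2 as dialogflow

client = dialogflow.AgentsClient()
parent = "projects/my-project"

with open("agent.zip", "rb") as f:
    zip_bytes = f.read()

operation = client.restore_agent(parent=parent, agent_content=zip_bytes)
operation.result()  # wait for the long-running restore to finish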
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even with agent.json included as above, the restore will still fail, this time returning the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation --
1) The agent.json file is now required in the zip file, where before it was optional
2) We found some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent ends up in an invalid state (the "Failed to get Training Phrases" error, etc. mentioned above). A sketch of how we regenerated those ids is below.
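For anyone hitting the same thing, this is roughly how we regenerated the bad ids. It is only a sketch and assumes the standard unzipped agent layout (an intents/ folder containing *_usersays_*.json files):

# Sketch: replace any non-UUID "id" values in the _usersays files with fresh UUIDs.
# Assumes the standard agent layout: intents/<intent>_usersays_<lang>.json
import glob
import json
import uuid

for path in glob.glob("intents/*_usersays_*.json"):
    with open(path, "r", encoding="utf-8") as f:
        phrases = json.load(f)  # each file is a JSON array of phrase objects

    changed = False
    for phrase in phrases:
        try:
            uuid.UUID(phrase["id"])
        except (KeyError, ValueError):
            phrase["id"] = str(uuid.uuid4())
            changed = True

    if changed:
        with open(path, "w", encoding="utf-8") as f:
            json.dump(phrases, f, indent=2)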
An easy way to fix the agent.json requirement is to export one of your existing agents and copy its agent.json and package.json into the directory you zip up before uploading; a sketch of pulling those two files out of an export follows.
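If it helps, this is roughly how the two files can be extracted from an existing agent's export (again just a sketch; the project ID and target directory are placeholders):

# Sketch: export an existing agent and copy its agent.json and package.json
# into the directory that gets zipped for the restore. Placeholders throughout.
import io
import zipfile

import dialogflow_v2 as dialogflow

client = dialogflow.AgentsClient()
operation = client.export_agent(parent="projects/my-existing-project")
exported = operation.result()  # ExportAgentResponse; agent_content holds the zip bytes

with zipfile.ZipFile(io.BytesIO(exported.agent_content)) as zf:
    for name in ("agent.json", "package.json"):
        zf.extract(name, path="my_agent")  # my_agent/ is the directory we re-zip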
agent.json is now required by DialogFlow.
Related
I am receiving an error when updating my report. The report has two sources, one in SQL Server and one in MariaDB. I have no problem assigning these two sources; however, when I try to automate the report and update it manually, it gives me the following error:
(the error message appears only in a screenshot, which is not reproduced here)
I have tried checking and cleaning the file sources, but it doesn't work.
I am attempting to clone a database. I was previously able to clone it in the console, but now I want to create a small script to automate this, and it fails with the following error message:
(gcloud.sql.instances.clone) [ERROR_RDBMS] unable to update the following flags: cloudsql.enable_password_validation
If I attempt to clone it in the console, I get the same error shown above.
I looked up the documentation, and enable_password_validation does not seem to be in the list of supported flags, which would explain why it can't be updated.
If I run gcloud sql instances describe my-instance, I don't see the flag in question.
But running on the source instance:
SELECT * FROM pg_settings
yields this row in particular:
name:             cloudsql.enable_password_validation
setting:          off
unit:             NULL
category:         Customized Options
short_desc:       Sets whether to enable Cloud SQL password validation.
extra_desc:       NULL
context:          superuser
vartype:          bool
source:           configuration file
min_val:          NULL
max_val:          NULL
enumvals:         NULL
boot_val:         on
reset_val:        off
sourcefile:       /pgsql/data/postgresql.auto.conf
sourceline:       3
pending_restart:  False
Any advice on how to solve this?
There is currently an ongoing issue with password validation in Cloud SQL Postgres instances. The issue involves the exact flag that is giving you problems, cloudsql.enable_password_validation:
Diagnosis: Affected postgres instances from a recent release have the following flag set and are unable to remove or disable this flag: cloudsql.enable_password_validation=on. This flag does not appear in Cloud Console, and attempting to disable flag via gcloud returns error where the flag is not recognized or supported. Password validation occurs on every new client connection but is limited to 50 QPS, and thus higher rates will return errors.
When did this issue start occurring, and have you attempted to clone the database again since then? I ask because the issue has received several updates. If you continue experiencing problems, you could open a support case with GCP, as the status page recommends.
EDIT (2/24/2022)
I wanted to update this answer. The issue seems to be resolved as shown in the status page of Google Cloud:
The issue with Cloud SQL has been resolved for all affected instances as of Tuesday, 2022-02-22 14:30 US/Pacific. We thank you for your patience while we worked on resolving the issue.
If you still see this error, you can update the question to confirm that it was not resolved as part of the outage fix.
I am trying to read from an S3 bucket in a Java Flink (version 1.11) application, loading the contents of an S3 file into a DataSet object using the following code:
ExecutionEnvironment executionEnv = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> dataSet = executionEnv.readTextFile("s3a://<bucket>/<key>");
Every time I try to run this I run into this error:
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3a'
...
Other posts about this error say it occurs because flink-s3-fs-hadoop needs to be configured in the environment. I have been trying to configure it for a while now, but the same error keeps occurring: the .jar for flink-s3-fs-hadoop is in my plugins directory, but it is not even recognized.
What are the steps for making this work within an IDE? All documentation online is very vague.
Is it possible to change the name of an Expo project without having to go through the entire process of building it again and submitting to the app store?
I accidentally didn't update the project name in the app.json file and now am stuck with an app called exmilti in TestFlight.
It took a few days to build, submit, and get the approval for TestFlight so I would love to avoid that process if there is a simple fix.
When I attempt to rebuild it in the CLI with the new name I get an error:
Reason: Unexpected response, raw:
{"responseId":"ed00c05f-82a0-41d6-9a7c-b48d04e68a1a","resultCode":35,"resultString":"There were errors in the data supplied. Please correct and re-submit.","userString":"Multiple profiles found with the name 'com.myComapanyName.AppName AppStore'. Please remove the duplicate profiles and try again."}
Which means to me that I am going to have to fully remove the app from TestFlight (yikes), re-upload the newly named app, and wait for them to approve it again.
Any advice?
Update - I did not find an easier way, so I just compiled the project and pushed it via Expo again with the correct name.
It didn't take as long as I had expected.
I'm new to Alexa Skill development and I'm sure this issue is process/environmental due to lack of experience.
Whenever I try to use a sample from an official Alexa tutorial, I can never get the skill to pass its first test; I always get an error :(
In this case I am trying to run and fiddle with this tutorial:
https://developer.amazon.com/blogs/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill
What is happening / What I've done:
I download the Node SDK from the Git link, and I also download the sample from its Git link. I then create a new ZIP that contains the sample code, with the Node SDK included at the path /src/alexa-sdk/.
I go to AWS and create a new function, not using a blueprint. I 'author from scratch' and create a function with the Skills Kit as a trigger. I name the function and use Node 6.10 runtime.
I upload my ZIP file and leave all boxes default, for Role I choose Custom Role then pick Basic Execution from the Role screen.
I leave the rest blank, go to NEXT and CREATE.
The function is created okay, but I do see this error 'This function contains external libraries. Uploading a new file will override these libraries.'
Here's the problem - this is the point of failure on all tutorials I've tried so far. I go to Configure Test Event, I choose ALEXA START SESSION as the template and click Save And Test...
EXECUTION RESULT FAILED:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "require (internal/module.js:20:19)"
  ]
}
Here's something from associated error logs, unsure if it's useful:
Unable to import module 'index': Error
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
I have noticed two things that I suspect may be an issue:
1) When I go to the CODE tab for this function, I see this message:
Your Lambda function "testprojectx" cannot be edited inline since the file name specified in the handler does not match a file name in your deployment package.
2) When I look at the code that's inserted into the test when I choose ALEXA SESSION START, I see many instances of 'unique value here':
amzn1.echo-api.session.[unique-value-here]
Although, there is no mention of this in the tutorial link I am referencing.
I'm really downhearted about it now, as this is the third tutorial I've tried to configure. Can anybody with experience follow the steps I've taken and point me in the right direction?
Thank you SO MUCH in advance if so.
EDIT: Absolute Clarification on how I am creating the ZIP file
I'm using Windows 10 and Chrome to download the files from GitHub.
I download the skill-sample-nodejs-decision-tree-master ZIP file from GitHub. I do not know how to use NPM, so I do this simply by downloading to the desktop.
I then download the alexa-skills-kit-sdk-for-nodejs-master.ZIP file to desktop.
I unzip the contents of skill-sample-nodejs-decision-tree-master into a folder on the desktop, also called skill-sample-nodejs-decision-tree-master.
Within this folder, I navigate to /src/ and create a new folder called 'node_modules' within /src/.
Within /src/node_modules/ I now create another new folder called 'alexa-sdk'.
I unzip the contents of alexa-skills-kit-sdk-for-nodejs-master.zip into /src/node_modules/alexa-sdk/.
I have tried two approaches from here - both fail:
1) I ZIP only the contents of /src/ (not including the /src/ folder itself) and upload to Amazon.
2) I ZIP the entire 'decision-tree-master' folder and upload to Amazon.
I must be missing something, as I said this is just one of many Alexa tutorials I've tried to get working and this always happens :( So disheartened now.
This is a common issue I have seen in many posts. In most cases the problem is how the files were zipped. Instead of zipping the folder, select all of the files inside it and zip those, so that index.js ends up at the root of the archive rather than inside a subfolder.
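If zipping by hand on Windows keeps producing a nested folder, a small script can guarantee the layout. Here is a rough sketch in Python (the folder and file names are assumptions based on the steps in the question):

# Sketch: build a Lambda deployment zip whose entries sit at the archive root
# (index.js, node_modules/...), with no extra top-level folder.
# "skill-sample-nodejs-decision-tree-master/src" is assumed from the question.
import os
import zipfile

src_dir = "skill-sample-nodejs-decision-tree-master/src"  # folder containing index.js

with zipfile.ZipFile("skill.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            full_path = os.path.join(root, name)
            # arcname is relative to src_dir, so index.js lands at the zip root
            zf.write(full_path, os.path.relpath(full_path, src_dir))

Lambda then finds index.js at /var/task/index.js, which matches the default index.handler handler setting.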