I tried uploading code as a zip file to AWS Lambda, but it's throwing an error: "unhandled error reporting in developer mode only. check the console logs for details. message: cannot read property 'renderer' of null".
I was facing the same problem.
The uploaded zip, which contains the code and library packages, was conflicting with the existing Lambda code and package structure.
Just refresh the page and the problem should be resolved.
I am trying to generate thumbnails from videos in an S3 bucket every x frames by following this documentation: https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/
I am at the point where I'm testing the Lambda code provided in the documentation, but receive this error in CloudWatch Logs:
Here is the portion of the Lambda code associated with this error:
Any help is appreciated. Thanks!
I am trying to read from an S3 bucket in a Java Flink (version 1.11) application, loading an S3 file into a DataSet object using the following code:
ExecutionEnvironment executionEnv = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> dataSet = executionEnv.readTextFile("s3a://<bucket>/<key>");
Every time I run this, I get the following error:
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3a'
...
Other posts about this error say it occurs because flink-s3-fs-hadoop needs to be configured in the environment. I have been trying to configure it for a while now, but the same error keeps occurring; the .jar for flink-s3-fs-hadoop is in my plugins directory, but it is not even recognized.
What are the steps for making this work within an IDE? All documentation online is very vague.
I'm trying to create a simple GCP Cloud Function via the GCP console, but the zip upload fails every time without a detailed reason:
The zip file contains the source files directly (not a folder containing the source files).
Even so, the function isn't created. I've searched online but couldn't find an answer.
screenshot of the error message
Thanks.
We have suddenly started experiencing an error when using the DialogFlow "restore agent" API. The call is failing with the error:
400 com.google.apps.framework.request.BadRequestException: Invalid agent zip. Missing required json file agent.json
Oddly, it only seems to happen for newly created DialogFlow agents, but not for older/existing ones. We are using this API so that we can programmatically create a custom agent using our own intents/entities. This code has been working for about the past two years, with no changes on our side. We are using the official DialogFlow client library for Python. We have been on version 0.2.0, and I tried updating to the latest (0.8.0) but there was no change.
I tried changing our code to include the agent.json file (by using the "export agent" API and getting the agent.json file from there). In that case, I no longer get the above error and the restore appears to succeed. However, the agent then seems to be corrupt in some way. When trying to click on any intent -- or various other operations in the DialogFlow console -- I get the error:
Failed to get Training Phrases Errorid=xxx
(where xxx seems to be a UUID that changes each time)
Trying to export the agent in that state also displays an error:
Error downloading agent
Occasionally, even including the agent.json as above, the restore will still fail but return the error:
500 Internal error encountered.
I appreciate any ideas on how we can get this working again. Thanks!
After a lot of trial and error I found the solution. Here it is in case anyone else runs into this. Something must have changed recently in how DialogFlow processes the zip upload during the "restore agent" operation:
1) The agent.json file is now required in the zip file, where before it was optional
2) We found that some of the "id" elements in our _usersays files for various intents were not valid UUIDs. Previously this did not cause any error, but now the agent winds up in an invalid state (the "Failed to get Training Phrases" error, etc., as mentioned above).
An easy way to fix this is to export one of your existing agents and copy its agent.json and package.json into your current directory before uploading.
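For reference, here is a rough sketch of that workflow using the Python client (assuming the 0.x dialogflow library; the project ID and the exported_agent directory name are just placeholders for your own setup):

import io
import os
import zipfile

import dialogflow_v2

PROJECT_ID = "my-gcp-project"   # placeholder: your GCP project ID
AGENT_DIR = "exported_agent"    # local folder with agent.json, package.json, intents/, entities/

# Build the zip in memory, keeping agent.json and package.json at the zip root.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(AGENT_DIR):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, arcname=os.path.relpath(path, AGENT_DIR))

client = dialogflow_v2.AgentsClient()
parent = "projects/" + PROJECT_ID
# restore_agent takes the zip bytes via agent_content and returns a long-running operation.
operation = client.restore_agent(parent, agent_content=buf.getvalue())
operation.result()  # wait for the restore to complete

The key point is that agent.json and package.json end up at the root of the zip, exactly as they do in an export from the console.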
agent.json is now required by DialogFlow.
I'm trying to build an Alexa prototype for a client using this tutorial: https://developer.amazon.com/public/community/post/Tx3DVGG0K0TPUGQ/New-Alexa-Skills-Kit-Template:-Step-by-Step-Guide-to-Build-a-Fact-Skill
I am getting errors when I upload the zip file with the AlexaSkill.js and index.js files in it. I believe these errors come from the system itself and have nothing to do with my code. Here is a screen grab of my browser console:
screen grab of the browser console errors
There's no way to see whether the zip file you upload was processed successfully (frustrating), but this looks bad, right?
Obviously, when I try to test the Lambda function I get this error:
{
  "errorMessage": "Cannot find module 'index'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)"
  ]
}
I desperately need to get this working. Has anyone got the code in a single file that I can paste into the inline code editor? I am using the FactSkill demo, which is very basic.
This is one of those 'I want to kick myself around the room' moments. The article tells you to download the ZIP archive from the Git repository and then upload it to the Lambda control panel. When you do that on a Mac, the download gets unzipped into a folder for you. I then zipped that folder back up and uploaded it. That was my problem ...
You need to zip the two files inside the folder and not the folder itself!
Then Lambda can find the module in the archive.
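As a rough illustration (just a sketch; the file names are the ones from the fact-skill template, and any zip tool works as long as the files end up at the top level of the archive):

import zipfile

# Zip the two skill files themselves, not the folder that contains them,
# so index.js sits at the root of the archive where Lambda looks for it.
files = ["index.js", "AlexaSkill.js"]   # run this from inside the unzipped folder

with zipfile.ZipFile("fact-skill.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in files:
        zf.write(name, arcname=name)    # no folder prefix in the archive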
DOH!!!
But, still ... Amazon, wtf is going on with all those errors?