Unable to import module Error in AWS Lambda Nodejs - amazon-web-services

I am working on an AWS Lambda function where I need to use AWS Textract. For this I have used the AWS SDK; with it I was able to import S3, but AWS Textract is not working. When I deploy and test, it shows:
Unable to import module 'src/functions/routes/handler': Error
I think it has something to do with packaging; maybe the relevant file is not being packaged. I am not sure, though, and I don't know how to make it package the file if that is the problem. Any suggestions are appreciated.

Since you have not given any code to go on, I will consider the possibility that your import code is written properly and this error is a valid response to a real issue. In that case, you should note that there are known issues with importing Textract in Lambda via the aws-sdk. See this reported issue:
https://github.com/aws/aws-sdk-js/issues/2728
For a thorough and thoughtful example of using Textract with a Lambda service, consider this excellent article, posted with complete details:
https://medium.com/@sumindaniro/aws-textract-with-lambda-walkthrough-ed4473aedd9d

Related

"import requests" in AWS Lambda without a layer?

I've had a Lambda function in Python 3.9 which is pretty simple; it just sends a POST request. It has worked fine for the past few weeks.
I've deleted and remade the stack several times in CloudFormation, and have had no issues.
Randomly, today it started throwing the error
"errorMessage": "Unable to import module 'index': No module named 'requests'",
"errorType": "Runtime.ImportModuleError"
index.py is the name of the Lambda function's file.
So I looked it up, and everyone says that 'requests' is not prepackaged into Lambda, and you need to create a layer where you manually install it into a zip etc.
How is this possible? It worked for weeks without having to create a layer, and the code was not touched.
But now 'import requests' just stops working?
There are of course other alternatives to the requests module; 'urllib' comes to mind. But this is bugging me. How is it possible that it worked for weeks, but randomly just stopped being able to import this module?
It's like some kind of Mandela effect. Every single person says you need to create a venv and package it with the dependencies manually. I've never had to do that, and it worked for AGES. I feel like I'm going crazy...
Did you possibly change the version of Python used by the function?
From Upcoming changes to the Python SDK in AWS Lambda | AWS Compute Blog:
In response to customer feedback, we have decided to cancel the change described in this blog post. The version of the AWS SDK included in the AWS Lambda runtimes for Python 2.7, Python 3.6 and Python 3.7 will continue to include the ‘requests’ module in Botocore. No action is required for customers using these runtimes. The Lambda runtimes for Python 3.8 and later do not include the ‘requests’ module. Customers using ‘requests’ with these runtimes should package the ‘requests’ module with their function code or as a Lambda layer.
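For the Python 3.8+ runtimes, where 'requests' is no longer bundled, the stdlib-only route mentioned above is often enough. A minimal sketch of a JSON POST using only urllib (the URL and payload here are placeholders, not from the question):

```python
import json
import urllib.request


def build_post(url, payload):
    # Construct a POST request with a JSON body using only the standard
    # library, so nothing extra needs to be packaged or put in a layer.
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def lambda_handler(event, context):
    # Hypothetical endpoint; substitute whatever the function was posting to.
    req = build_post("https://example.com/endpoint", {"status": "ok"})
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode("utf-8")}
```

If you do need the richer requests API, the packaging/layer approach from the quoted blog post is the way to go.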

AWS Glue Sagemaker Notebook "No module named awsglue.transforms"

I've created a SageMaker notebook to develop AWS Glue jobs, but when running through the provided example ("Joining, Filtering, and Loading Relational Data with AWS Glue") I get the following error: No module named 'awsglue.transforms'.
Does anyone know what I've set up wrong, or haven't set up, that causes the import to fail?
You'll need to download the library files from here for Glue 0.9 or here for Glue 1.0 (Check your Glue jobs for the version).
Put the zip in S3 and reference it in the "Python library path" on your Dev Endpoint.
I had the same issue, and the selected solution did not work for me.
I did manage to get it working by using CloudFormation (AWS::Glue::DevEndpoint).
Through trial and error I noticed that you can't specify both NumberOfNodes and NumberOfWorkers at the same time; you have to specify one or the other.
Using NumberOfNodes: 5 resulted in the exact same error as in the question, but using NumberOfWorkers worked perfectly.
So, to conclude: to fix this error you can use CloudFormation, and make sure to use the NumberOfWorkers property.
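For reference, a minimal sketch of the CloudFormation fragment this describes, assuming hypothetical resource names, role, and S3 path; the point is that NumberOfWorkers is set and NumberOfNodes is omitted:

```yaml
GlueDevEndpoint:
  Type: AWS::Glue::DevEndpoint
  Properties:
    EndpointName: my-dev-endpoint            # hypothetical name
    RoleArn: !GetAtt GlueDevEndpointRole.Arn # hypothetical role resource
    GlueVersion: "1.0"
    WorkerType: G.1X
    NumberOfWorkers: 5                       # use this, not NumberOfNodes
    ExtraPythonLibsS3Path: s3://my-bucket/glue-libs.zip  # hypothetical path
```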
Hm... this approach doesn't work for me.
I've just put the zip in "Python library path" and referenced it, and it still doesn't work.
Add AWSGlueServiceNotebookRole to your Dev Endpoint's IAM role, restart your kernel, and rerun.

Dynamically append function to AppSync Pipeline Resolver via CloudFormation

I am currently developing an AppSync-based API in a domain-driven manner, so we need to add a function to an already created pipeline resolver. Does anybody know if there is any way of doing this via CloudFormation without using a custom resource?
Thanks in advance, Sven
Terraform can do this neatly if you build this pull request yourself, or vote on this issue to get it merged into a public release.
The new syntax is described here.
The build process is actually quite simple. It took me about 30 minutes end to end.
1. Install Go.
2. Clone the repo with the changes and sync it with the main (upstream) repo.
3. Make sure you clone it into the go\src\github.com\terraform-providers\terraform-provider-aws folder.
4. Run go build from go\src\github.com\terraform-providers\terraform-provider-aws.
5. Replace the .terraform\plugins\...\terraform-provider-aws-* executable with the one you compiled.
6. Run terraform init.
7. Test by trying to import a function: terraform import aws_appsync_function.example xxxxx-yyyyy
I hope the pull request gets merged by the time you read this.

Problem creating Lambda function that has a Layer using boto3

If I try to use boto3 Lambda create_function() to create a Lambda function, and I try to include Layers via Layers=['string'] parameter, I get the following error message:
Unknown parameter in input: "Layers", must be one of: FunctionName, Runtime, Role, Handler, Code, Description, Timeout, MemorySize, Publish, VpcConfig, DeadLetterConfig, Environment, KMSKeyArn, TracingConfig, Tags
... any ideas? The documentation suggests that this should work, but something is clearly off here. NOTE: I also have a similar problem with "Layers" in update_function_configuration() as well.
My guess is that the version of boto3 that the AWS Lambda console uses has not been updated/refreshed yet to support Layers, because when I run the same code locally on a machine with a fairly recent version of boto3, it runs without any problems. I have already tried both listed Python runtimes, 3.6 and 3.7, in the AWS console, but neither worked. These runtimes ship boto3 versions 1.7.74 and 1.9.42 respectively, while my local machine has 1.9.59. So perhaps support for Lambda Layers was added between 1.9.42 and 1.9.59.
My guess is that the version of boto3 that the AWS Lambda console uses has not been updated/refreshed yet to support Layers.
That's completely right. AWS usually updates the available libraries on AWS Lambda regularly, but hasn't updated them for several months now for unknown reasons.
The supported API endpoints are actually not defined in boto3, but in botocore.
Currently botocore 1.10.74 is available on AWS Lambda, while support for AWS Lambda Layers got added in botocore 1.12.56.
To avoid such incompatibilities between your code and the versions of the available libraries, you should create a deployment package containing boto3 and botocore in addition to your AWS Lambda function code, so your code uses your bundled versions instead of the ones AWS provides. That's what AWS suggests as part of their best practices as well:
Control the dependencies in your function's deployment package.
The AWS Lambda execution environment contains a number of libraries such as the AWS SDK for the Node.js and Python runtimes (a full list can be found here: Lambda Execution Environment and Available Libraries). To enable the latest set of features and security updates, Lambda will periodically update these libraries. These updates may introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, we recommend packaging all your dependencies with your deployment package.
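To illustrate the version gap, here is a small sketch based on the numbers quoted above (Layers support in botocore 1.12.56, Lambda shipping 1.10.74): a check of whether a given botocore version is new enough, and the shape of the create_function parameters including Layers. The function name, handler, and ARNs are placeholders, not from the question.

```python
# Layers support was added in botocore 1.12.56; the version bundled in
# the Lambda runtimes at the time of this answer was 1.10.74.
LAYERS_MIN_BOTOCORE = (1, 12, 56)


def supports_layers(version_string):
    # Compare a dotted version string against the first botocore release
    # whose Lambda service model accepts the Layers parameter.
    parts = tuple(int(p) for p in version_string.split(".")[:3])
    return parts >= LAYERS_MIN_BOTOCORE


def build_create_function_params(role_arn, layer_arns):
    # Parameter dict for lambda_client.create_function(**params).
    # With botocore older than 1.12.56, the "Layers" key triggers the
    # "Unknown parameter in input" error from the question.
    return {
        "FunctionName": "my-function",          # placeholder
        "Runtime": "python3.7",
        "Role": role_arn,
        "Handler": "index.lambda_handler",      # placeholder
        "Code": {"ZipFile": b"..."},            # placeholder bytes
        "Layers": layer_arns,
    }
```

Bundling a current boto3/botocore in the deployment package, as the quoted best practice recommends, sidesteps this entirely.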

Dialogflow using AWS Lambda webhook

I am trying to use AWS Lambda as the webhook for a Dialogflow fulfillment call.
I want to use all the agent/Google Actions libraries.
So I copied the sample code shown in the Dialogflow console and pasted it into Lambda, and I installed all the npm libraries. But when testing I get this error:
TypeError: Cannot read property 'result' of undefined
at new WebhookClient (/var/task/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:84:27)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/var/task/index.js:13:17)
at cloudFunction (/var/task/node_modules/firebase-functions/lib/providers/https.js:57:9)
I have exposed a Lambda function this way before, but since the requirement here is for Dialogflow, I am not able to get it working.
Note: I am passing the same request payload to both Google Cloud Functions and AWS Lambda; Google Cloud returns the correct result, but Lambda returns the error mentioned above.
Not sure if I am missing a step or my understanding is wrong here.
Please help.
Can you please post some code here as well for more reference?
If you've installed the required npm libraries for Dialogflow, then it should work as far as I can understand.
Also, please note that if you're trying to access Dialogflow input request parameters such as {"userId": string, "idToken": string}, they will not be accessible by default in the Lambda event, and hence you may face errors like Cannot read property 'result.originalRequest.user.idToken' of undefined.
You'll have to first enable Google integrations on the fulfillment intents so that Google can ask users for permission to send these attributes in the request body.
Also, since this was asked many months ago, please let me know if your issue was resolved.