I'm trying to create a Lambda function which takes Apache log files from an S3 bucket, parses them into JSON documents, and adds them to Elasticsearch, as recommended in the following link:
https://github.com/awslabs/amazon-elasticsearch-lambda-samples
but I keep hitting the following error:
{
"errorMessage": "Cannot find module 'byline'",
"errorType": "Error",
"stackTrace": [
"Object.<anonymous> (/var/task/index.js:19:18)",
"Module._compile (module.js:409:26)",
"Object.Module._extensions..js (module.js:416:10)",
"Module.load (module.js:343:32)",
"Function.Module._load (module.js:300:12)",
"Module.require (module.js:353:17)"
]
}
Kindly recommend a solution for this.
Apparently you aren't including the byline package required by the Lambda function. You have to run npm install locally, package your source code together with all dependencies into a zip file, and upload that zip to Lambda. Lambda will not run npm install for you; it expects all dependencies to be uploaded. This is documented here.
Try
npm -s install --production --prefix <folder>
then zip it and upload it.
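The packaging step above can also be sketched programmatically. This Python helper (illustrative, not part of the npm tooling) builds a zip whose entries sit at the archive root, which is the layout Lambda expects for index.js and node_modules:

```python
# Sketch of the packaging step, assuming dependencies were already
# installed into node_modules/ inside src_dir. All names are illustrative.
import os
import zipfile

def build_deployment_zip(src_dir, zip_path):
    """Zip the function source so entries sit at the archive root:
    Lambda expects index.js at the top level, not inside a parent folder."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname relative to src_dir keeps the archive root flat
                zf.write(full, os.path.relpath(full, src_dir))
    return zip_path
```

The resulting archive can then be uploaded with `aws lambda update-function-code --zip-file fileb://<zip>`.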
I am trying to deploy my GeoDjango app with Zappa.
First I got:
django.core.exceptions.ImproperlyConfigured: Could not find the GDAL library
(tried "gdal", "GDAL", "gdal2.2.0", "gdal2.1.0", "gdal2.0.0", "gdal1.11.0",
"gdal1.10.0", "gdal1.9.0"). Is GDAL installed? If it is, try setting
GDAL_LIBRARY_PATH in your settings.
Then I followed this link and made the changes below.
I set these environment variables in my AWS Lambda console:
"LD_LIBRARY_PATH": "/tmp/code/lib/",
"PROJ_LIB": "/tmp/code/lib/proj4/",
and in my (Django) app's settings file, I set:
GDAL_LIBRARY_PATH = "/tmp/code/lib/libgdal.so.20.1.3"
GEOS_LIBRARY_PATH = "/tmp/code/lib/libgeos_c.so.1"
Now I am getting the error:
OSError: /tmp/code/lib/libgdal.so.20.1.3: cannot open shared object file: No such file or directory
How can I fix this?
Summary of what I have done
$ pip install zappa
$ zappa init
$ zappa deploy prod
Below is my zappa_settings.json
{
"prod": {
"aws_region": "us-east-1",
"django_settings": "Cool.settings",
"profile_name": "default",
"project_name": "cool",
"runtime": "python3.6",
"s3_bucket": "coolplaces-t47c5adgt",
"extra_permissions": [{
"Effect": "Allow",
"Action": ["rekognition:*"],
"Resource": "*"
}]
}
}
I'm assuming you've bundled the two required libraries into your Lambda deployment package.
In the Lambda container, the package gets extracted into the /var/task directory. That directory is already on the LD_LIBRARY_PATH, so try pointing the other necessary environment variables at /var/task as well.
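For example, assuming the shared objects were bundled under a lib/ folder in the package (the filenames below are the ones from the question), the settings would point at /var/task. This is a sketch under that assumption, not a verified configuration:

```python
# settings.py -- paths assume the .so files sit under lib/ in the
# deployment package; /var/task is where Lambda extracts it
GDAL_LIBRARY_PATH = "/var/task/lib/libgdal.so.20.1.3"
GEOS_LIBRARY_PATH = "/var/task/lib/libgeos_c.so.1"
```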
OK, I think I almost have it. This is what I did:
zappa undeploy prod
pip uninstall zappa
delete the zappa_settings.json file
Step 1)
$ pip install git+git://github.com/bahoo/Zappa.git#egg=zappa
Step 2) Then run zappa init; you will see it automatically creates a file called zappa_settings.json.
Add to your zappa_settings.json:
"project_directory": "/tmp/code", (copy this as-is; drop the trailing "," if this is the last entry)
"slim_handler": true (use this if you get an error saying your file is too big, which you likely will, as the lib files alone are 107.1 MB; again, no trailing "," since this was the last entry in my zappa_settings.json, and no quotes around true)
Step 3) Make a directory called lib in your root directory and copy the library files into it (see the image below):
https://imgur.com/yyd0ixn
Step 4) In your AWS Lambda console, set these environment variables:
"LD_LIBRARY_PATH": "/tmp/code/lib/",
"PROJ_LIB": "/tmp/code/lib/proj4/",
Remember: do not replace code with your own path; keep it as-is.
https://imgur.com/a/UZIz65B
Step 5) Add these to your Django settings.py (again, do not replace code with your own path; keep it as-is):
GDAL_LIBRARY_PATH = "/tmp/code/lib/libgdal.so.20.1.3"
GEOS_LIBRARY_PATH = "/tmp/code/lib/libgeos_c.so.1"
Step 6) Finally, run zappa deploy dev or zappa deploy prod, whichever stage you want.
Step 7) If it gives you errors, run zappa tail; it will show you all the logs and tell you what the errors are. Fix them and run zappa update.
This was successful. Thank you, bahoo, so much for your help and for taking the time to break it down for me. Also, thank you so much for making GeoDjango work on Zappa.
It gave me an error saying bad request and told me to add a long Amazon hostname to my allowed hosts. I did that. The next error was to add my database, which I am doing now, but I feel I've got it.
For more details refer to
https://github.com/Miserlou/Zappa/issues/985
I am running the AWS DevSecOps project present here:
In the StaticCodeAnalysis stage of the pipeline, I am getting an AWS Lambda function failure.
On checking the log the error is:
"Unable to import module cfn_validate_lambda: No module named cfn_validate_lambda".
I checked the S3 bucket that holds the Python code zip and also ensured that the zip file has Public in its permissions.
Please let me know how to resolve this.
Thanks.
You have to package and zip the dependencies carefully.
The problem lies in the packaging hierarchy: after you install the dependencies into a directory, zip the Lambda function as follows (in the example below, lambda_function is the name of my function).
Try this:
pip install requests -t .
zip -r9 lambda_function.zip .
zip -g lambda_function.zip lambda_function.py
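The error "Unable to import module cfn_validate_lambda" usually means the module file ended up nested inside a folder in the zip rather than at its root. A small Python check (hypothetical helper name) makes the rule concrete:

```python
# Sketch: verify the handler module sits at the top level of the archive,
# which is what Lambda's "Unable to import module" check requires.
import zipfile

def handler_module_at_root(zip_path, module_name):
    """Return True if <module_name>.py is a top-level entry in the zip;
    an entry like "code/<module_name>.py" would not be importable."""
    with zipfile.ZipFile(zip_path) as zf:
        return module_name + ".py" in zf.namelist()
```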
I have a go application, structured like this:
cmd/reports/main.go
main.go imports the internal/reports package and has a single function, main(), which delegates the call to the aws-lambda-go/lambda.Start() function.
The code is built by running these commands (snippet):
cd internal/reports && go build handler.go
cd ../..
go build -o reports ../cmd/reports/main.go && chmod +x reports && zip reports.zip reports
reports.zip is uploaded to AWS Lambda, which in turn throws an error when the Test button is pressed:
{
"errorMessage": "fork/exec /var/task/reports: exec format error",
"errorType": "PathError"
}
reports is set as the Lambda's handler.
The code is built on an Ubuntu 14.04 machine, as part of the aws/codebuild/ubuntu-base:14.04 Docker image, on AWS CodeBuild, so there should be no environment issues here, even though the error suggests a cross-platform problem.
Any ideas?
You have to build with GOARCH=amd64 GOOS=linux: wherever you build your binary, the binary for Lambda runs on Amazon Linux.
So, try this build command:
GOARCH=amd64 GOOS=linux go build handler.go
The issue is that the main() function is not declared in the main package, which the Go language spec mandates: program execution starts at func main() in package main, so building internal/reports/handler.go on its own cannot produce a runnable binary.
I wrote a simple Alexa skill. It uses "alexa-app" as dependency.
var alexa = require('alexa-app');
When I save and test my skill I get the following response
{
"errorMessage": "Cannot find module 'alexa-app'",
"errorType": "Error",
"stackTrace": [
"Function.Module._load (module.js:276:25)",
"Module.require (module.js:353:17)",
"require (internal/module.js:12:17)",
"Object.<anonymous> (/var/task/index.js:4:13)",
"Module._compile (module.js:409:26)",
"Object.Module._extensions..js (module.js:416:10)",
"Module.load (module.js:343:32)",
"Function.Module._load (module.js:300:12)",
"Module.require (module.js:353:17)"
]
}
Is it possible to use this "alexa-app" dependency without baking it into a zip file? To make development quicker, I'd prefer working with just one file in the online Lambda code editor. Is this possible?
No, you will need to include it in a zip along with any other files. It really isn't difficult to do, though, and you can use the AWS CLI to simplify it.
Here is a bash script that I use on my Mac for doing this:
# Create the archive if it doesn't already exist
# (generally not needed; normally the existing archive is just refreshed)
if [ ! -f ./Lambda.zip ];
then
echo "Creating Lambda.zip"
else
echo "Updating existing Lambda.zip"
fi
# Update and upload new archive
zip -u -r Lambda.zip index.js src node_modules
echo "Uploading Lambda.zip to AWS Lambda";
aws lambda update-function-code --function-name ronsSkill --zip-file fileb://Lambda.zip
The script above packages an index.js file along with all the files in the ./src and ./node_modules directories, and uploads them to my 'ronsSkill' Lambda function.
I use alexa-app also, and npm places it in the node_modules directory.
I'm trying to get casperjs to work with my AWS Lambda function.
{
"errorMessage": "Cannot find module 'casper'",
"errorType": "Error",
"stackTrace": [
"Function.Module._load (module.js:276:25)",
"Module.require (module.js:353:17)",
"require (internal/module.js:12:17)",
"Object.<anonymous> (/var/task/index.js:3:14)",
"Module._compile (module.js:409:26)",
"Object.Module._extensions..js (module.js:416:10)",
"Module.load (module.js:343:32)",
"Function.Module._load (module.js:300:12)",
"Module.require (module.js:353:17)"
]
}
I keep getting this error where Lambda can't find casperjs. I installed the casperjs module into my directory, zipped the files up, and uploaded the zip to Lambda.
My package.json file says I have casperjs installed.
{
"name": "lambda",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"casperjs": "^1.1.3"
}
}
Would anyone know what I'm doing wrong? Thanks.
Since CasperJs relies on PhantomJs, you can set it up very similarly to this repo: https://github.com/TylerPachal/lambda-node-phantom.
The main difference being that you need to add and target CasperJs and you need to make sure that CasperJs can find and load PhantomJs.
Create a node_modules directory in your package directory.
Add a dependency for CasperJs to the packages.json file:
"dependencies": {
"casperjs": "latest"
}
In Terminal, navigate to your package directory and run 'npm update' to add the CasperJs package to the node_modules directory.
Assuming that you want to run CasperJs with the 'test' argument, the index.js file will need to be changed to look like this:
var childProcess = require('child_process');
var path = require('path');
exports.handler = function(event, context) {
// Set the path as described here: https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];
// Set the path to casperjs
var casperPath = path.join(__dirname, 'node_modules/casperjs/bin/casperjs');
// Arguments for the casper script
var processArgs = [
'test',
path.join(__dirname, 'casper_test_file.js')
];
// Launch the child process
childProcess.execFile(casperPath, processArgs, function(error, stdout, stderr) {
if (error) {
context.fail(error);
return;
}
if (stderr) {
context.fail(stderr);
return;
}
context.succeed(stdout);
});
}
If you don't want to run CasperJs with the 'test' argument, just remove it from the arguments list.
The PhantomJs binary in the root directory of your package needs to be renamed to phantomjs so that CasperJs can find it. If you would like a newer version of PhantomJs, you can get one here: https://bitbucket.org/ariya/phantomjs/downloads. Make sure to download a linux-x86_64.tar.bz2 build so that it can run on Lambda. Once downloaded, pull the binary out of the bin directory and place it in your root package directory.
In order for Lambda to have permission to access all the files, it's easiest to zip the package in a Unix-like operating system. Make sure that all the files in the package have read and execute permissions. From within the package directory: chmod -R o+rx *. Then zip it up with: zip -r my_package.zip *.
Upload the zipped package to your Lambda function.
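One caveat if you script the archive step in Python instead of the zip CLI: the stdlib zipfile module does not carry over execute bits by default, so the Unix mode has to be set explicitly or the bundled phantomjs binary lands in /var/task without its execute bit. A minimal sketch (helper name is illustrative):

```python
# Sketch: zipfile.writestr drops file modes unless external_attr is set,
# so attach the Unix permission bits to each entry explicitly.
import zipfile

def add_executable(zf, arcname, data, mode=0o755):
    """Write `data` into the open ZipFile `zf` as `arcname`, with the
    given Unix mode stored in the entry's external attributes."""
    info = zipfile.ZipInfo(arcname)
    info.external_attr = mode << 16  # Unix mode lives in the high 16 bits
    zf.writestr(info, data)
```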
According to the CasperJS docs, it is not actually a Node module, so you cannot require it in package.json and zip it up with your node modules. You will need to find a way to install it on the Lambda instance, or find an actual Node module that does what you want. I suspect installing Casper on Lambda might not be possible, but that's just my gut.
Warning
While CasperJS is installable via npm, it is not a NodeJS module and will not work with NodeJS out of the box. You cannot load casper by using require('casperjs') in node. Note that CasperJS is not capable of using a vast majority of NodeJS modules out there. Experiment and use your best judgement.
http://docs.casperjs.org/en/latest/installation.html