I wrote a simple Alexa skill. It uses "alexa-app" as a dependency:
var alexa = require('alexa-app');
When I save and test my skill, I get the following response:
{
  "errorMessage": "Cannot find module 'alexa-app'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)",
    "Object.<anonymous> (/var/task/index.js:4:13)",
    "Module._compile (module.js:409:26)",
    "Object.Module._extensions..js (module.js:416:10)",
    "Module.load (module.js:343:32)",
    "Function.Module._load (module.js:300:12)",
    "Module.require (module.js:353:17)"
  ]
}
Is it possible to use this "alexa-app" dependency without baking it into a zip file? To make development quicker, I'd prefer working with just one file in the online Lambda code editor. Is this possible?
No, you will need to include it in a zip along with any other files. It really isn't difficult to do though. You can use the AWS CLI to simplify this.
Here is a bash script that I use on my Mac for doing this:
# Create the archive if it doesn't already exist
# (generally not needed; usually the existing zip is just refreshed)
if [ ! -f ./Lambda.zip ]; then
    echo "Creating Lambda.zip"
else
    echo "Updating existing Lambda.zip"
fi
# Update and upload new archive
zip -u -r Lambda.zip index.js src node_modules
echo "Uploading Lambda.zip to AWS Lambda";
aws lambda update-function-code --function-name ronsSkill --zip-file fileb://Lambda.zip
The above script packages up an index.js file along with all the files in the ./src and ./node_modules directories, and uploads them to my 'ronsSkill' Lambda function.
I use alexa-app also, and it is included in the node_modules directory by npm.
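In case it helps, here is a minimal sketch of getting alexa-app into node_modules before zipping (run from the project root that contains index.js; this assumes you keep a package.json there):
# Install alexa-app locally so it lands in ./node_modules,
# which the zip command above then picks up
npm install alexa-app --save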
I have multiple lambdas in my AWS serverless project, and they all have the same CodeUri, which is a folder in my project. For example, they all point to src/.
In basic usage, or at least how I use it, sam build creates a folder for each lambda. All of the folders contain the same code. Then, when running sam deploy, each zip is uploaded to S3. The zips are all the same, and it takes a lot of redundant time to upload them all.
Is there an option to tell sam to upload it only once?
I saw that I can build the zip manually and then upload it to S3. How can I then set the URI in the CodeUri of the lambdas? Should I do it using an external parameter, or is there a dedicated way to signal it?
Thank you
After some hard effort, using David Conde's idea, I managed to find a solution. What we want to achieve is uploading the lambda's zip (with all of its dependencies) once and pointing all the lambdas to this zip.
This process is separated into a couple of steps, which I'll try to describe in as much detail as I can. Some of them might not be exactly relevant for your case.
General idea
The general idea is to create a layer which contains the code we want to upload once. Then, for each lambda, we specify that it uses this layer. Each lambda will have its handler point to somewhere in the source directory, hence running code "from the layer". But we must still attach a code zip to each lambda, even if it is not what is actually going to run. To work around that, we attach "empty" code.
Build folder
First, create a build folder where we are going to work, for example: mkdir -p .build/
In addition, define the following variables:
s3_bucket="aaaaaaaa"
s3_prefix="aaaaaaaa"
s3_lambda_zip_suffix="$s3_prefix/lambda.zip"
s3_lambda_zip="s3://$s3_bucket/$s3_lambda_zip_suffix"
Creating the source zip
When a lambda is unzipped, its content is written to the working directory. When a layer is unzipped, it is extracted into /opt, as documented by AWS. Because our lambda needs to find our source code, which is "a dependency", it needs to find it under /opt. To achieve that, we need it to be unzipped into /opt/python. We can do that by zipping a python/... folder into the zip file.
First, we create the python folder and install the dependencies into it:
mkdir -p .build/lambda_zip/python
pip3 install -q --target .build/lambda_zip/python -r requirements.txt
Then we zip it:
pushd .build/lambda_zip/ > /dev/null
zip --quiet -r ./lambda.zip ./python
popd > /dev/null
Now, you probably want to add your src directory:
zip --quiet -r .build/lambda_zip/lambda.zip src
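As an optional sanity check (just a sketch, assuming the paths above), you can list the archive and confirm that python/ and src/ sit at the top level, since that is the layout the layer extraction under /opt expects:
# List the archive contents; entries should start with python/ or src/
unzip -l .build/lambda_zip/lambda.zip | head -n 20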
Uploading to S3
Now, we have to upload the zip into S3 for our lambdas to load it.
aws s3 cp ".build/lambda_zip/lambda.zip" "$s3_lambda_zip"
Adding layer to template.yaml
Now, we need to add the layer to our template.yaml file. You can copy the following code after reading the AWS documentation:
Parameters:
  LambdaCodeUriBucket:
    Type: String
  LambdaCodeUriKey:
    Type: String
Resources:
  OnUpHealthLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub '${LambdaCodeUriBucket}'
        S3Key: !Sub '${LambdaCodeUriKey}'
Create an empty zip for the lambdas
CloudFormation must upload a zip for each lambda, so we want it to create an empty one. But sam scans the dependencies from the requirements.txt file in the same directory as template.yaml, and we want it to upload something empty. Hence the build must happen in another folder.
To solve this, I copy template.yaml to an empty directory and add an empty requirements.txt file. After that, we can run sam build and sam deploy as usual. Notice that we must pass LambdaCodeUriBucket and LambdaCodeUriKey:
#create "empty" environment for the template to be built in
mkdir -p .build/empty_template
cp template.yaml .build/empty_template
pushd .build/empty_template > /dev/null
touch requirements.txt
sam build --template template.yaml
sam deploy \
    --template-file .aws-sam/build/template.yaml \
    --capabilities "CAPABILITY_IAM" \
    --region $region \
    --s3-bucket $s3_bucket \
    --s3-prefix $s3_prefix \
    --stack-name $stack_name \
    --parameter-overrides LambdaCodeUriBucket=$s3_bucket LambdaCodeUriKey=$s3_lambda_zip_suffix
popd > /dev/null
Notice that although we copied template.yaml and called sam build on the new one, we already uploaded the zip file to S3.
An important thing you must do is specify . as the CodeUri for your lambdas, because they now use the "empty zip".
In the future, we will be able to do:
InlineCode: |
  def handler(event, context):
      pass
and not have to specify the . folder.
But currently sam doesn't support inline code for python3.8, hence we use .. Anyway, you will have to move it to a separate folder to remove its dependencies.
I am running the AWS DevSecOps project present here:
In the StaticCodeAnalysis stage of the pipeline, the AWS Lambda function fails.
On checking the log, the error is:
"Unable to import module cfn_validate_lambda: No module named cfn_validate_lambda".
I checked the S3 bucket that has the Python code zip and also ensured that the zip file has Public in its permissions.
Please let me know how to resolve this.
Thanks.
You have to package and zip the dependencies carefully...
The problem lies in the packaging hierarchy. After you install the dependencies in a directory, zip the Lambda function as follows (in the example below, lambda_function is the name of my function):
Try this:
# Install the dependency into the current directory, alongside the handler
pip install requests -t .
# Zip the directory contents, dependencies included
zip -r9 lambda_function.zip .
# Add (or refresh) the handler file in the archive
zip -g lambda_function.zip lambda_function.py
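Once the archive is built, one way to push it up is with the AWS CLI; this is just a sketch, and the function name below is a placeholder:
# Replace the code of an existing Lambda function with the new archive
aws lambda update-function-code --function-name my-function --zip-file fileb://lambda_function.zip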
I am trying to load data from Netezza to AWS. I want to add spark.executor.extraClassPath and spark.driver.extraClassPath to spark-defaults.conf in a bootstrap action. Please find updconf.sh (the bootstrap action) below:
#!/bin/bash
sudo aws s3 cp s3://my-bucket/nzjdbc.jar /usr/lib/sqoop/lib/ --sse aws:kms --sse-kms-key-id 'xxxxxxxxxxxxxxxxxxxxxx'
echo "updating spark-defaults.conf"
sudo chmod 777 /home/hadoop/spark/conf/spark-defaults.conf
sudo echo >> /home/hadoop/spark/conf/spark-defaults.conf
driverstr='/usr/lib/sqoop/lib/nzjdbc.jar'
sudo echo "export spark.executor.extraClassPath=`echo $driverstr`">>/home/hadoop/spark/conf/spark-defaults.conf
sudo echo "export spark.driver.extraClassPath=`echo $driverstr`">>/home/hadoop/spark/conf/spark-defaults.conf
But I am getting '/etc/spark/conf/spark-defaults.conf: No such file or directory' error.
How can I do this?
There are several ways to achieve this. The simplest one is shown below:
Create a file called myConfig.json:
"Configurations": [
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.extraClassPath": "value for executors classpath",
      "spark.driver.extraClassPath": "value for driver classpath",
      "spark.yarn.maxAppAttempts": "...",
      // ...other properties as you need
    }
  }
]
Upload the file to an S3 bucket.
While creating the cluster from the console, just provide the location of the file in S3 under "Edit software settings".
Provide other parameters as you need and create the cluster.
Moreover, you can also use the AWS CLI or any of the AWS SDKs (for example, Python) to provide these config parameters.
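For example, with the AWS CLI you can point the --configurations option of create-cluster at the file in S3 (the bucket path, release label, and instance settings below are placeholders; see the reference link for the exact syntax):
# Create an EMR cluster that applies the spark-defaults classification from S3
aws emr create-cluster --name "my-cluster" \
    --release-label emr-5.36.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge --instance-count 3 \
    --use-default-roles \
    --configurations https://s3.amazonaws.com/my-bucket/myConfig.json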
Reference : http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-configure.html
I'm trying to get casperjs to work with my AWS Lambda function.
{
  "errorMessage": "Cannot find module 'casper'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)",
    "Object.<anonymous> (/var/task/index.js:3:14)",
    "Module._compile (module.js:409:26)",
    "Object.Module._extensions..js (module.js:416:10)",
    "Module.load (module.js:343:32)",
    "Function.Module._load (module.js:300:12)",
    "Module.require (module.js:353:17)"
  ]
}
I keep getting this error where Lambda can't find casperjs. I uploaded my zip file to Lambda, and I installed the casperjs module into my directory before I zipped the files up.
My package.json file says I have casperjs installed:
{
  "name": "lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "casperjs": "^1.1.3"
  }
}
Would anyone know what I'm doing wrong? Thanks.
Since CasperJs relies on PhantomJs, you can set it up very similarly to this repo: https://github.com/TylerPachal/lambda-node-phantom.
The main difference being that you need to add and target CasperJs and you need to make sure that CasperJs can find and load PhantomJs.
Create a node_modules directory in your package directory.
Add a dependency for CasperJs to the package.json file:
"dependencies": {
  "casperjs": "latest"
}
In Terminal, navigate to your package directory and run 'npm update' to add the CasperJs package to the node_modules directory.
Assuming that you want to run CasperJs with the 'test' argument, the index.js file will need to be changed to look like this:
var childProcess = require('child_process');
var path = require('path');

exports.handler = function(event, context) {
    // Set the path as described here: https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
    process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

    // Set the path to casperjs
    var casperPath = path.join(__dirname, 'node_modules/casperjs/bin/casperjs');

    // Arguments for the casper script
    var processArgs = [
        'test',
        path.join(__dirname, 'casper_test_file.js')
    ];

    // Launch the child process
    childProcess.execFile(casperPath, processArgs, function(error, stdout, stderr) {
        if (error) {
            context.fail(error);
            return;
        }
        if (stderr) {
            context.fail(stderr);
            return;
        }
        context.succeed(stdout);
    });
};
If you don't want to run CasperJs with the 'test' argument, just remove it from the arguments list.
The PhantomJs binary in the root directory of your package needs to be renamed to phantomjs, so that CasperJs can find it. If you would like to get a new version of PhantomJs, you can get one here: https://bitbucket.org/ariya/phantomjs/downloads. Make sure to download a linux-x86_64.tar.bz2 type so that it can run in Lambda. Once downloaded, just pull a new binary out of the bin directory and place it in your root package directory.
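For example, a sketch of fetching a 64-bit Linux build and dropping the renamed binary into the package root (the version number in the URL and paths below are illustrative):
# Download and extract a Linux 64-bit PhantomJS build, then place the
# binary in the package root under the name "phantomjs"
curl -L -O https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
tar -xjf phantomjs-2.1.1-linux-x86_64.tar.bz2
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs ./phantomjs
chmod +x ./phantomjs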
In order for Lambda to have permission to access all the files, it's easiest to zip the package in a Unix-like operating system. Make sure that all the files in the package have read and execute permissions. From within the package directory: chmod -R o+rx *. Then zip it up with: zip -r my_package.zip *.
Upload the zipped package to your Lambda function.
According to the CasperJS docs, it is not actually a Node module. So you cannot just list it in package.json, require it, and zip it up with node_modules. You will need to find out how to install it on the Lambda instance, or find an actual Node module that does what you want. I suspect installing Casper on Lambda might not be possible, but that's just my gut.
Warning
While CasperJS is installable via npm, it is not a NodeJS module and will not work with NodeJS out of the box. You cannot load casper by using require('casperjs') in node. Note that CasperJS is not capable of using a vast majority of NodeJS modules out there. Experiment and use your best judgement.
http://docs.casperjs.org/en/latest/installation.html
I'm trying to create a Lambda function which takes Apache log files from an S3 bucket, parses them into JSON documents, and adds them to ES, as recommended in the following link:
https://github.com/awslabs/amazon-elasticsearch-lambda-samples
but I'm constantly facing the following error:
{
  "errorMessage": "Cannot find module 'byline'",
  "errorType": "Error",
  "stackTrace": [
    "Object.<anonymous> (/var/task/index.js:19:18)",
    "Module._compile (module.js:409:26)",
    "Object.Module._extensions..js (module.js:416:10)",
    "Module.load (module.js:343:32)",
    "Function.Module._load (module.js:300:12)",
    "Module.require (module.js:353:17)"
  ]
}
Kindly recommend a solution for this.
Apparently you aren't including the byline package required by the Lambda function. You have to run npm install locally and package up your source code and all dependencies into a zip file, and upload that to Lambda. Lambda will not run npm install for you and it expects all dependencies to be uploaded. This is documented here.
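Here is a minimal sketch of that workflow, assuming the handler lives in index.js and byline is listed in package.json (the function name is a placeholder):
# Install dependencies locally so they end up in ./node_modules
npm install
# Package the handler together with its dependencies
zip -r function.zip index.js node_modules
# Upload the package to the Lambda function
aws lambda update-function-code --function-name my-log-parser --zip-file fileb://function.zip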
Try
npm -s install --production --prefix <folder>
then zip it and upload it.