How can I update my AWS Lambda function from VSCode? - amazon-web-services

So I have an AWS Lambda function written in NodeJS, but I am tired of coding in the AWS Console or having to manually zip my code in VSCode and upload it in the AWS Console.
I know that I can update my function with aws lambda update-function-code --function-name myFunction --zip-file "fileb://myZipFile". But how can I zip it and run this command every time I save my work in VSCode?
Also, I am on Windows.

You can't do this without some additional work.
A few options are:
use the Run on Save VS Code extension and configure a custom command to run when a file is saved
create a SAM project and install the AWS Toolkit for VS Code extension to provide deployment assistance
create a package.json that includes a script for zip/deployment and use the NPM extension for VS Code to execute the deploy script
build a CI/CD solution: use VS Code to commit and push your code, then the pipeline takes over and deploys it
use a shell script, or a Makefile with a target, that zips and deploys, and then simply execute it, manually or otherwise, in VS Code's built-in terminal
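For the first option, the Run on Save extension is configured through settings.json. A sketch is below; the match pattern, the use of 7-Zip, the function name, and the file list are assumptions to adapt to your project:

```json
{
  "emeraldwalk.runonsave": {
    "commands": [
      {
        "match": "\\.js$",
        "cmd": "7z a -tzip myFunction.zip index.js node_modules && aws lambda update-function-code --function-name myFunction --zip-file fileb://myFunction.zip --no-cli-pager"
      }
    ]
  }
}
```

With this in place, every save of a .js file rebuilds the zip and pushes it to Lambda; --no-cli-pager keeps the CLI from opening a pager with the response.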

I use a script like the one below and run it whenever I need to update.
echo "Building zip file"
zip -rq testfunction.zip testfunctionfolder/
echo "update Lambda function"
FUNCTION_ARN=$(aws lambda update-function-code \
--function-name testfunction \
--zip-file fileb://testfunction.zip \
--query 'FunctionArn' \
--output text)
echo "Lambda function updated with ARN ${FUNCTION_ARN}"

Related

Suppress opening of text editor when using aws lambda update-function-code

When running
aws lambda update-function-code --function-name my-function --zip-file fileb://my-zip-file.zip
it opens an editor with the details upon completion. How can I prevent this, or how can I add a command to automatically close it? Can I pipe :q into my command or something similar?
The feature you are looking for is called the AWS CLI pager, and there are several ways to suppress it.
Disable for single command
Use --no-cli-pager option
aws lambda update-function-code --function-name my-function --zip-file fileb://my-zip-file.zip --no-cli-pager
Disable permanently
Using environment variable
The first option to disable the AWS CLI pager permanently is to set AWS_PAGER to an empty string in your shell initialization file (.bashrc, .zshrc, ...):
export AWS_PAGER=""
Using AWS CLI config
The second option is to use the CLI configuration; this way you can have different settings for different profiles (AWS_PROFILE):
aws configure set cli_pager ""
Config example
[default]
...
cli_pager =
[admin]
...
cli_pager = less

How can I download/pull lambda code to a local machine from command line?

I am using the sam deploy command with the AWS SAM command line tool to deploy.
Now I made some changes with the web IDE in the AWS Console.
How can I pull the changes to the local machine, so that the next sam deploy command won't override them? (I am looking for something similar to a git pull I guess)
To do this you will need to use the AWS CLI; the first step is the get-function command.
This returns a presigned URL in the Code > Location field of the response. If you then download that URL (using a CLI tool such as curl), you get a zip file containing the contents of the Lambda function.
The command would look similar to the following:
curl $(aws lambda get-function --function-name $FUNCTION_NAME --output text --query "Code.[Location]")
You should have a single source of truth for your source code. And that should really be your source control repository (Git). If you make changes to your source code in the web IDE then you should copy those changes into your Git repo.
To your original question, to download a Lambda function's source code from the command line, you would use the aws lambda get-function command to download information about the function. Part of the information included in the response is a URL to download the function's deployment package, which is valid for 10 minutes. Then you could download the deployment package at that URL using something like curl.
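The two steps described above can be sketched as a small shell function. This is a sketch, not an official workflow: the function and directory names are placeholders, and it assumes the AWS CLI, curl, and unzip are installed and configured.

```shell
# Pull a Lambda function's deployed code down to the local machine.
pull_lambda() {
  fn="$1"
  # get-function returns a presigned URL (valid for 10 minutes) to the package
  url=$(aws lambda get-function --function-name "$fn" \
          --query 'Code.Location' --output text)
  curl -sSf -o "$fn.zip" "$url"      # download the deployment package
  unzip -oq "$fn.zip" -d "$fn-src"   # unpack next to your working copy
}

# usage: pull_lambda myFunction
```

You can then diff the extracted files against your local copy before the next sam deploy, which is about as close to a git pull as the Lambda API offers.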

How to export serverless cloudformation output variables to a file or a task runner?

I'm using serverless.yml to create a couple of services in AWS CloudFormation, specifically: cognitoUserPool and UserPoolClient.
Both of these creations return IDs that I use in my flat HTML files with the Cognito library to connect to Amazon Cognito, so, since I am serving flat files from S3, I need these values to be coded inside the files.
Now I'm looking for a way of automating this, perhaps by leaving a placeholder in the files and then running them through a preprocessor that replaces the placeholders with the output values before uploading them to S3.
Any ideas how this can be achieved? My first guess would be to export the output variables from serverless deploy and then use these values on a task runner.
To achieve this without using a Serverless plugin, add the following to your package.json file:
"scripts": {
  "sls:info": "sls info --verbose | tee ./.slsinfo"
}
This will create the file .slsinfo containing your serverless outputs (amongst other things). Run it by calling npm run sls:info.
You can then update package.json:
"scripts": {
  "sls:deploy": "sls deploy && npm run sls:info",
  "sls:info": "sls info --verbose | tee .slsinfo"
}
Now you can call npm run sls:deploy and it will deploy your service and add your outputs to .slsinfo file.
To use the info in .slsinfo, the easiest way I have found is a regular expression. Example below:
const slsinfo = require('fs').readFileSync('./.slsinfo', 'utf8');
function getOutput(output) {
  const match = slsinfo.match(new RegExp(output + ': (.*)\\n'));
  return match ? match[1] : null;
}
Using the above method you can get your output as follows (note that var is a reserved word, so use another name):
const myOutputValue = getOutput('MyOutputName');
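If you prefer to stay in the shell, the same extraction can be done with sed. The .slsinfo content below is hypothetical, purely for illustration; real content comes from sls info --verbose:

```shell
# hypothetical .slsinfo content, standing in for `sls info --verbose` output
cat > .slsinfo <<'EOF'
Stack Outputs:
  UserPoolId: us-east-1_ABC123
  UserPoolClientId: 4f2example
EOF

# print the value that follows "<key>: " on its line
get_output() { sed -n "s/^ *$1: //p" .slsinfo; }

get_output UserPoolId   # prints us-east-1_ABC123
```

This avoids the Node round trip entirely when you only need one value in a build script.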
To get outputs from serverless you can use the serverless-stack-output plugin or you can deduce the stack name and use the aws command.
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query Stacks[0].Outputs
Replace SERVICE with your service name and STAGE with your stage. You should get a JSON object with the outputs from this command.
If you want to get just specific outputs, try:
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query 'Stacks[0].Outputs[?OutputKey==`OUTPUT_KEY`].OutputValue' --output text
Replace SERVICE, STAGE and OUTPUT_KEY with the values you want.
On Windows use (the quotes work differently):
aws cloudformation describe-stacks --stack-name SERVICE-STAGE --query Stacks[0].Outputs[?OutputKey==`OUTPUT_KEY`].OutputValue --output text
For more details on --query see https://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html
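Whichever way you fetch the outputs, the placeholder preprocessing the question describes can then be a single sed substitution. A sketch, with a hypothetical placeholder name and an assumed value standing in for the real stack output:

```shell
# hypothetical flat file with a placeholder for the user pool ID
cat > index.html <<'EOF'
<script>var userPoolId = "__USER_POOL_ID__";</script>
EOF

# in a real script this value would come from describe-stacks or .slsinfo
USER_POOL_ID="us-east-1_ABC123"

# replace the placeholder in place (a .bak copy is kept), then upload to S3
sed -i.bak "s/__USER_POOL_ID__/$USER_POOL_ID/" index.html
```

The -i.bak form works with both GNU and BSD sed, so the same script runs on Linux and macOS.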

AWS lambda function with python "errorMessage": "Unable to import module 'index'"

I am trying to make a POST call from a Lambda function, but I am not able to run the code in the AWS Console, although it is working properly on my system.
You need to install the dependencies in the folder where you have index.py then you need to zip the contents of the folder and upload the zip file to AWS Lambda.
Please note that you need to zip the contents of the folder, do not zip the folder itself.
On Windows, you can install the packages into the folder using the command below:
pip install package-name -t "/path/to/project-dir"
I had this error today, and this is the first result on Google, so I'll add my answer. In short, I had specified the handler incorrectly on the command line when I uploaded the function.
aws lambda create-function --function-name python-test-lambda --runtime python3.7 --role arn:aws:iam::123123123123:role/service-role/rolearn --handler lambda_function.lambda_handler --zip-file fileb://lambda_function.zip
i.e. this part was incorrect: the --handler value must be <file-name>.<function-name>, so lambda_function.lambda_handler points at the function lambda_handler in lambda_function.py.

How to deploy to AWS Lambda quickly, alternative to the manual upload?

I am getting started with writing an Alexa Skill. My skill requires uploading a .zip file, as it includes the alexa-sdk dependency stored in the node_modules folder.
Is there a more efficient way to upload a new version of my Lambda function and files from my local machine without zipping and manually uploading the same files over and over again? Something like git push, or a different way to deploy via the terminal with a single command?
You can use the update-function-code CLI command.
Note that this operation can only be used on an existing Lambda function and cannot be used to update the function configuration.
To add to Khalid's answer, I recently created this rudimentary script to ease a particular Lambda function's deployment. This example is for a NodeJS Lambda function which has its dependencies located in the node_modules folder.
Prerequisites:
Have 7zip installed. Found here
Have it available in CMD (have it on system PATH variable) as explained here
Have your local aws-cli set up with valid credentials that have access to uploading to AWS Lambda.
rm -rf target        # clean any previous build
mkdir -p target
cp -r index.js package.json node_modules/ target/   # stage the deployable files
pushd target
7z a zip_file_name.zip -r   # zip the staged contents
popd
aws lambda update-function-code \
--function-name YOUR_FUNCTION_NAME \
--zip-file fileb://target/zip_file_name.zip \
--region us-east-1
My one-liner for bash:
zip -u f.zip f.py; aws lambda update-function-code --zip-file fileb://f.zip --function-name f