Can we import our locally created lambda function to AWS console?

I have been working on AWS Lambda. For some reason, I had to create a Lambda function locally. I just want to know: can we import a locally created Lambda function into the AWS Lambda console? If yes, please elaborate on how I can achieve this.

It is pretty easy:
1. Write your Lambda function with a lambda_handler (a minimal sketch follows below).
2. Create a requirements.txt.
3. Install all the requirements into the same folder.
4. Package it as a zip file.
5. Go to AWS Lambda --> Create function --> fill in all details --> Create function.
6. Under the Code section: Upload from --> .zip file --> (select your file) --> Upload --> Save.
7. Modify the handler according to your function name.
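For concreteness, here is a minimal sketch of step 1 in Python. The file name lambda_function.py and the returned payload are just examples, so the handler setting in the console would be lambda_function.lambda_handler; dependencies from requirements.txt can be vendored into the same folder with pip install -r requirements.txt -t . before zipping.

# lambda_function.py -- minimal example handler
import json

def lambda_handler(event, context):
    # event is the invocation payload; context holds runtime metadata
    print("Received event:", json.dumps(event))
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda"}),
    }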

Yes, you can. Assume you create the Lambda function using the Java runtime API (com.amazonaws.services.lambda.runtime.RequestHandler) and you create a fat JAR that contains all of the required dependencies. You can deploy the Lambda function (the JAR) using the AWS Management Console, as described in this AWS tutorial:
Creating an AWS Lambda function that detects images with Personal Protective Equipment

Related

Edit image file in S3 bucket using AWS Lambda

I have some images already uploaded to an AWS S3 bucket, and of course there are a lot of them. I want to edit and replace those images, and I want to do it on the AWS side, using AWS Lambda.
I can already do this job from my local PC, but it takes a very long time. So I want to do it on the server.
Is it possible?
Unfortunately, directly editing a file in S3 is not supported; check out the thread. To work around this, you need to download the file to a server/local machine, edit it, and re-upload it to the S3 bucket. You can also enable versioning.
For Node.js you can use Jimp
For Java: ImageIO
For Python: Pillow
Or you can use any technology to edit it and later upload it using the aws-sdk (a Python sketch follows).
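As a rough Python illustration of that download/edit/re-upload loop (the bucket, key, and the resize step are placeholders, not taken from the question):

import io
import boto3
from PIL import Image

s3 = boto3.client("s3")

def edit_image(bucket, key):
    # Download the original object into memory
    obj = s3.get_object(Bucket=bucket, Key=key)
    img = Image.open(io.BytesIO(obj["Body"].read()))
    fmt = img.format or "PNG"  # remember the format; resize() drops it

    # Example edit: shrink to half size (replace with your real processing)
    img = img.resize((img.width // 2, img.height // 2))

    # Re-upload, overwriting the original key
    buf = io.BytesIO()
    img.save(buf, format=fmt)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())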
For the Lambda function you can use the Serverless Framework - https://serverless.com/
I made some YouTube videos a while back on how to get started with aws-lambda and serverless:
https://www.youtube.com/watch?v=uXZCNnzSMkI
You can trigger a Lambda using the AWS SDK.
Write a Lambda to process a single image and deploy it.
Then, locally, use the AWS SDK to list the images in the bucket and invoke the Lambda (asynchronously) for each file using invoke. I would also record somewhere which files have been processed, so you can resume if something fails.
Note that the default limit for Lambda is 1000 concurrent executions, so to avoid reaching the limit you can send messages to an SQS queue (which then triggers the Lambda) or just retry when invoke throws an error.
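A minimal sketch of that local driver script in Python with boto3 (the bucket and function names are placeholders):

import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

BUCKET = "my-image-bucket"          # placeholder
FUNCTION = "process-single-image"   # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        # InvocationType="Event" makes the invocation asynchronous
        lam.invoke(
            FunctionName=FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"bucket": BUCKET, "key": obj["Key"]}),
        )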

Serverless-ly Query External REST API from AWS and Store Results in S3?

Given a REST API, outside of my AWS environment, which can be queried for json data:
https://someExternalApi.com/?date=20190814
How can I set up a serverless job in AWS to hit the external endpoint on a periodic basis and store the results in S3?
I know that I can instantiate an EC2 instance and just setup a cron. But I am looking for a serverless solution, which seems to be more idiomatic.
Thank you in advance for your consideration and response.
Yes, you absolutely can do this, and probably in several different ways!
The pieces I would use would be:
- A CloudWatch Events rule using a cron-like schedule, which then triggers...
- A Lambda function (with the right IAM permissions) that calls the API using, e.g., Python's requests or an equivalent HTTP library, and then uses the AWS SDK to write the results to an S3 bucket of your choice (sketched below);
- An S3 bucket ready to receive!
This should be all you need to achieve what you want.
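Here is a minimal sketch of such a Lambda in Python. The bucket name is a placeholder, and the requests library would need to be bundled into the deployment package:

import datetime
import boto3
import requests

s3 = boto3.client("s3")
BUCKET = "my-results-bucket"  # placeholder

def lambda_handler(event, context):
    # Query the external API from the question for today's data
    date = datetime.date.today().strftime("%Y%m%d")
    resp = requests.get("https://someExternalApi.com/", params={"date": date})
    resp.raise_for_status()

    # Store the raw JSON in S3, keyed by date
    s3.put_object(
        Bucket=BUCKET,
        Key="results/{}.json".format(date),
        Body=resp.text,
        ContentType="application/json",
    )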
I'm going to skip the implementation details, as they are largely outside the scope of your question. As such, I'm going to assume your function is already written and targets Node.js.
AWS can do this on its own, but to make it simpler, I'd recommend using Serverless. We're going to assume you're using this.
Assuming you're entirely new to serverless, the first thing you'll need to do is to create a handler:
serverless create --template "aws-nodejs" --path my-service
This creates a service based on the aws-nodejs template on the provided path. In there, you will find serverless.yml (the configuration for your function) and handler.js (the code itself).
Assuming your function is exported as crawlSomeExternalApi on the handler export (module.exports.crawlSomeExternalApi = () => {...}), the functions entry in your serverless.yml would look like this if you wanted to invoke it every 3 hours:
functions:
  crawl:
    handler: handler.crawlSomeExternalApi
    events:
      - schedule: rate(3 hours)
That's it! All you need now is to deploy it through serverless deploy -v
Under the hood, this creates a CloudWatch schedule entry for your function. An example can be found in the documentation.
The first thing you need is a Lambda function. Implement your logic of hitting the API and writing the data to S3 (or wherever) inside the Lambda function. Next, you need a schedule to periodically trigger your Lambda function. A schedule expression can be used to trigger an event periodically, using either a cron expression or a rate expression. The Lambda function you created earlier should be configured as the target for this CloudWatch rule.
The resulting flow will be: CloudWatch invokes the Lambda function whenever there's a trigger (depending on your CloudWatch rule), and Lambda then performs your logic.
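If you prefer to wire the schedule up programmatically rather than through the console, a boto3 sketch might look like the following (the rule name and function ARN are placeholders):

import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-crawler"  # placeholder

# Create (or update) a rule that fires once a day
rule = events.put_rule(
    Name="daily-external-api-crawl",
    ScheduleExpression="rate(1 day)",
)

# Point the rule at the Lambda function
events.put_targets(
    Rule="daily-external-api-crawl",
    Targets=[{"Id": "crawler-target", "Arn": FUNCTION_ARN}],
)

# Allow CloudWatch Events to invoke the function
lam.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-cloudwatch-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)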

AWS Multiple Lambda Functions in one zip

I'm using CloudFormation to create Lambda resources, and I have several Python scripts as well as one JS script in a lambda-resources folder. Would it be okay to pass the same file location for every Lambda function and just specify unique handlers? When my Lambda function is created, it looks like it only creates one Lambda function.
Yes, this is definitely one way to accomplish what you're looking to do.
You'll need to create a zipped version of your lambda-resources folder and upload it via the Lambda service or even to S3, then reference it as the file location for each Lambda function.
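For illustration, here is how the same idea might look with boto3 instead of CloudFormation: one shared artifact, several functions that differ only in handler (and runtime). All names, the role ARN, and the S3 location are placeholders:

import boto3

lam = boto3.client("lambda")

CODE = {"S3Bucket": "my-deploy-bucket", "S3Key": "lambda-resources.zip"}  # placeholder
ROLE = "arn:aws:iam::123456789012:role/my-lambda-role"                    # placeholder

# Two functions sharing one zip, distinguished only by handler and runtime
for name, handler, runtime in [
    ("process-images", "process_images.handler", "python3.9"),
    ("notify-users", "notify.handler", "nodejs18.x"),
]:
    lam.create_function(
        FunctionName=name,
        Runtime=runtime,
        Role=ROLE,
        Handler=handler,
        Code=CODE,
    )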

Where libraries are stored for AWS Lambda Functions?

I'm a newbie with AWS Lambda functions. I used an AWS CLI script to create a Lambda function in Node.js. This script has a config file called config.json. After creating the function, I'm able to see the code in the AWS Lambda console, and here comes my doubt. The code has this line:
var config = require('./config.json');
So, where is this "./config.json" file actually stored? Would I be able to edit the contents of config.json after deployment of the Lambda function?
Thanks in advance.
So, where is this ./config.json file actually stored?
It should be stored in the same directory as your Lambda handler function. They should be bundled in a zip file and deployed to AWS. If you didn't deploy it that way then that file doesn't currently exist.
If your Lambda function consists of multiple files you will have to bundle your files and deploy it to AWS as a zip file.
You cannot edit the source of external libraries/files via the AWS Lambda web console. You can only edit the source of the Lambda function handler via the web console.
Your files are placed into the directory specified in the environment variable LAMBDA_TASK_ROOT. You can read this via nodejs as process.env.LAMBDA_TASK_ROOT.
The code you deploy, including the config.json file are read-only, but if you do wish to modify files on the server, you may do so underneath /tmp. Mind, those changes will only be valid for that single container, for its lifecycle (4m30s - 4hrs). Lambda will auto-scale up and down between 0 and 100 containers, by default.
Global variables are also retained across invocations, so if you read config.json into a global variable, then modify that variable, those changes will persist throughout the lifecycle of the underlying container(s). This can be useful for caching information across invocations, for instance.
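To make the caching point concrete, here is the same pattern sketched for a Python Lambda (the Node.js equivalent is analogous); the file layout is illustrative:

import json
import os

# Resolve the deployed file relative to the task root
TASK_ROOT = os.environ.get("LAMBDA_TASK_ROOT", ".")

# Loaded once per container, then reused across invocations
_config = None

def get_config():
    global _config
    if _config is None:
        with open(os.path.join(TASK_ROOT, "config.json")) as f:
            _config = json.load(f)
    return _config

def lambda_handler(event, context):
    cfg = get_config()
    # Changes to _config persist only for this container's lifetime
    return {"configured_keys": list(cfg.keys())}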

AWS Lambda and zip upload from S3

The benefits of this feature are not clear to me (I didn't find any good documentation):
Is it just faster in the case where you reuse the same zip for many Lambda functions, because you upload it only once and just give the S3 link URL to each Lambda function?
If you use an S3 link, will all your Lambda functions be updated with the latest code automatically when you re-upload the zip file? In other words, is the zip file on S3 a "reference" to use on each call to a Lambda function?
Thank you.
EDIT:
I have been asked "Why do you want the same code for multiple Lambda functions anyway?"
Because I use AWS Lambda with AWS API Gateway so I have 1 project with all my handlers which are actual "endpoints" for my RESTful API.
EDIT #2:
I confirm that uploading a modified version of the zip file to S3 doesn't change the existing Lambda functions' behavior.
If an AWS guy reads this message, it would be great to have a kind of batch update feature that updates a set of selected Lambda functions with one zip file on S3 in one click (or even an "automatic update" feature that detects when the file has been updated ;-))
Let's say you have 50 handlers in one project, and you modify something global impacting all of them; currently you have to go through all your Lambda functions and update the zip file manually...
The code is imported from the zip to Lambda. It is exactly the same as uploading the zip file through the Lambda console or API. However, if your Lambda function is big (they say >10MB), they recommend uploading to S3 and then using the S3 import functionality because that is more stable than directly uploading from the Lambda page. Other than that, there is no benefit.
So for question 1: no. Why do you want the same code for multiple Lambda functions anyway?
Question 2: If you overwrite the zip you will not update the Lambda function code.
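The batch update the asker wishes for can be approximated with a short script. A boto3 sketch, with placeholder bucket, key, and function names, that re-points each function at the freshly uploaded zip:

import boto3

lam = boto3.client("lambda")

BUCKET = "my-deploy-bucket"  # placeholder
KEY = "handlers.zip"         # placeholder
FUNCTIONS = ["users-endpoint", "orders-endpoint", "billing-endpoint"]  # placeholders

# S3 is only a source at update time, not a live reference, so each
# function must be explicitly told to pull the new code
for name in FUNCTIONS:
    lam.update_function_code(
        FunctionName=name,
        S3Bucket=BUCKET,
        S3Key=KEY,
    )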
To add to other people's use cases, having the ability to update a Lambda function from S3 is extremely useful within an automated deployment / CI process.
The instructions under New Deployment Options for AWS Lambda include a simple Lambda function that can be used to copy a ZIP file from S3 to Lambda itself, as well as instructions for triggering its execution when a new file is uploaded.
As an example of how easy this can make development and deployment, my current workflow is:
1. I update my Node Lambda application on my local machine and git commit it to a remote repository.
2. A Jenkins instance picks up the commit, pulls down the appropriate files, adds them into a ZIP file, and uploads this to an S3 bucket.
3. The LambdaDeployment function then automatically deploys this new version for me, without me needing to even leave my development environment.
To answer what I think is the essence of your question, AWS allows you to use S3 as the origin for your Lambda zip file because sometimes uploading large files via your browser can timeout. Also, storing your code on S3 allows you to store it centrally, rather than on your computer and I'm sure there is a CodeCommit tie-in there as well.
Using the S3 method of uploading your code to Lambda also allows you to upload larger files (AWS has a 10MB limit when uploading via web browser).
#!/bin/bash
# Requires the AWS CLI to be installed and a profile to be configured, e.g.:
#   aws --profile yourProfileName configure
cd /your/workspace

# Zip up the new code, excluding git metadata, binaries, and old zips
zip -FSr yourzipfile.zip . -x *.git* *bin/\* *.zip

# Push the new zip to S3 (e.g. as the CloudFormation lambda CodeUri source);
# YOURBUCKET is a placeholder
aws s3 cp yourzipfile.zip s3://YOURBUCKET/yourzipfile.zip

# Update the Lambda function code directly from the local zip file
aws lambda update-function-code --function-name arn:aws:lambda:us-west-2:YOURID:function:YOURFUNCTIONNAME --zip-file file://yourzipfile.zip