Is there a way to containerize a normal AWS Lambda function? - amazon-web-services

My AWS Lambda functions take input from AWS SNS (topic subscription) and their output goes to CRUD operations in a NoSQL database (such as MongoDB).
Currently I have the SNS and Lambda function set up in the AWS cloud and they are working fine. However, I would like to containerize the Lambda function as well as the MongoDB database and host them on AWS EKS using Docker + Kubernetes. (So the functions would become a Docker image.)
I am totally new to containers, and although I searched online I could not find anything that mentions how to containerize AWS Lambda functions.
Is this possible? If it is, what are the ways to do it?
Thank you.

A Docker environment for AWS Lambda functions already exists: lambci/lambda. If you want to run or test your functions locally, this is the tool normally used for that:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.
Since it's open source, you can also modify it if it does not suit your needs.
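For example, a minimal sketch of invoking a Node.js handler with a test event (the runtime tag, handler name, and event here are placeholders; match them to your function):

    # mount the function code from the current directory and invoke index.handler
    docker run --rm -v "$PWD":/var/task lambci/lambda:nodejs12.x index.handler '{"Records": []}'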

Lambda already runs on Firecracker, a microVM technology, so I'm not really sure why you would need to create a container out of a Lambda function.
The beauty of Lambda/serverless is that you simply write the function code and forget about the rest. If it's all about more control, then look at Knative, which runs on top of K8S.

Related

Run AWS Lambda inside Docker locally

I'm new to AWS, and for learning purposes I created a free AWS account. I don't want to install all the dependencies and packages and configure them against my test account on my PC until I know them well. So I planned to create a Docker image so I can do the configuration later on my PC. But I can't find any good example of how to set up a Docker image for AWS Lambda. Can you please help me set up a Docker image?
P.S.
I'm using Node.js
Check out https://github.com/localstack/localstack - a fully functional local AWS cloud stack (Lambda as well).
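For example, a rough sketch of running LocalStack and pointing the AWS CLI at it (the single edge port 4566 applies to recent versions; older releases exposed one port per service):

    # start LocalStack with its default set of services
    docker run --rm -it -p 4566:4566 localstack/localstack

    # send AWS CLI calls to the local endpoint instead of the real cloud
    aws --endpoint-url=http://localhost:4566 lambda list-functions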
The solution will depend on the language you are going to use for your lambdas.
Try some tutorials, e.g. the following describes how to simulate Lambda for Python:
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-layer-simulated-docker/
A recent AWS knowledge-center article describes how to do it:
How do I create a Lambda layer using a simulated Lambda environment with Docker?
Basically, you can run an already-made Docker image for that:
https://hub.docker.com/r/lambci/lambda/
This is the same Docker image used by AWS SAM (Serverless Application Model) when you test your Lambda function locally, so this is the closest you can get to the real Lambda environment.
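For example, assuming a template.yaml that defines a function named HelloWorldFunction (the function name and event file are placeholders), SAM pulls that image and runs your handler inside it:

    # invoke the function locally inside the Lambda-like container
    sam local invoke HelloWorldFunction -e event.json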

AWS Lambda Function to Container

I have a deployed AWS Lambda function that I would like to run in a container. There are no restrictions on where this container is run (AWS Fargate, EC2, my local machine, etc. are all fine), and no restrictions on which containerization software is used, though Docker is preferred.
I have access to everything about the Lambda function returned from this Lambda API call. I do not have access to the raw code.
I looked into AWS SAM, though it appears that I need a copy of the raw code in order to create a container.
Is this possible to do? If so, how can I do it?

Run golang lambda function locally

I'm trying to develop a Lambda function that has to work with S3 and DynamoDB.
The thing is that, because I am not familiar with the AWS SDK for Go, I will have lots of trial-and-error iterations, and every time I change the code I have to compile the project and upload it to AWS again.
Is there any way to do this locally? Can I pass some kind of configuration that lets me call AWS services locally, from my computer?
Thanks!
This mostly concerns Go; other languages like Python can be edited directly on the AWS Lambda function page, and Node has Cloud9 support.
You can use the lambci Docker image(s) to execute your code locally using the same Lambda runtimes that are used on AWS:
https://github.com/lambci/docker-lambda
You can also run DynamoDB locally in another container:
https://hub.docker.com/r/amazon/dynamodb-local/
To simulate the credentials/roles that would be available on Lambda, just pass your API credentials in via environment variables (for S3 access).
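A rough sketch of that workflow for a Go function (image tags, dummy credentials, and the event are placeholders; DynamoDB Local accepts any credentials):

    # start DynamoDB locally (data is kept in memory by default)
    docker run -d -p 8000:8000 amazon/dynamodb-local

    # build a Linux binary and run it in the Go Lambda runtime image;
    # inside the container, point your SDK clients at the local DynamoDB
    # endpoint (e.g. http://host.docker.internal:8000 on Mac/Windows)
    GOOS=linux go build -o handler
    docker run --rm -v "$PWD":/var/task \
      -e AWS_ACCESS_KEY_ID=local -e AWS_SECRET_ACCESS_KEY=local \
      lambci/lambda:go1.x handler '{"name": "test"}'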
Cheers
-JH
You could use the aws-lambda-go-test module, which can run a Lambda locally and can be used to test the actual response from the Lambda.
Full disclosure: I forked and upgraded this module.

AWS Docker Container for Local Development

I'm using AWS DynamoDB, Lambda, ElasticSearch, and ElastiCache (Redis). I want to bring all these services offline for local development. I wonder, is there a Docker container for all these services?
Perhaps! There's a (set of) Docker containers that claim they provide local implementations of popular AWS services: localstack.
Edit: For lambda specific things there's also Docker Lambda!
I've never actually used these Docker containers, but have wanted to. (But my development needs lean toward commodity services instead of vendor-specific ones: MongoDB instead of DynamoDB, and sure, we might use ElastiCache to run our Redis cluster, but that just means in local development we can use Redis directly. Having said that, that's not everyone's cup of tea / may not be possible for some things.)
We use docker for most AWS Services for local development except for AWS Lambda.
We use the service containers as below:
MySQL for RDS MySQL
Redis for ElastiCache
ElasticSearch for AWS ElasticSearch
fake-s3 for S3
ActiveMQ for mocking SQS and SNS topics (The implementation for SNS topics is a bit ugly, but abstracted out in one place with some if-else statements)
Most of our services use docker-compose to start the dependent containers. We've included these containers on our build server too, to run our integration tests.
In addition, most of the containers we are using needed some modifications to the original Dockerfile, so we had to push our changes to our own Docker repository, which we maintain using ECR.
For Lambda, we do not use a docker container as we start our own HTTP server locally to test and invoke the lambda function.
Been using this setup for over a year without any issues. You may also want to refer to this blog from IFTTT to get some more ideas around DNS resolution and how to make this effort better.
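As a rough sketch, a docker-compose file for this kind of setup might look like the following (the image names, versions, and ports are illustrative, not the exact ones we use):

    version: "3"
    services:
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: local
        ports: ["3306:3306"]
      redis:
        image: redis:alpine
        ports: ["6379:6379"]
      elasticsearch:
        image: elasticsearch:7.17.9
        environment:
          discovery.type: single-node
        ports: ["9200:9200"]
      fake-s3:
        image: lphoward/fake-s3
        ports: ["4569:4569"]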

Accessing files in EC2 from Lambda

I have a few EC2 servers in AWS. Whenever disk usage exceeds a limit, I want to delete some files (maybe a logs folder) on an EC2 instance automatically. I am planning to use Lambda and CloudWatch for this. Can I use Lambda to interact with EC2? If that's not possible, what is an alternate approach to achieve this functionality?
This is not an appropriate use-case for an AWS Lambda function.
AWS Lambda is suitable for tasks where compute is required in response to an event. Your use-case, however, is to manipulate information on an EC2 instance, which does not need cloud compute.
You could run a script on each computer, triggered by a Scheduled Task.
Alternatively, you could use the Systems Manager Run Command (also known as the EC2 Run Command), which allows you to run commands on multiple Amazon EC2 instances and view the results. This could be used to trigger a local script, or it could pass the whole command to run (including the script). It is purpose-built for the type of task you describe.
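For example (a sketch; the instance ID, path, and retention period are placeholders), you can push a one-off cleanup command to an instance with the AWS CLI:

    aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --targets "Key=instanceids,Values=i-0123456789abcdef0" \
      --parameters 'commands=["find /var/log/myapp -name \"*.log\" -mtime +7 -delete"]'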
AWS Lambda has access to your instances if they are reachable over the internet. If they are not, you can give Lambda access using a NAT gateway or an Internet Gateway in your VPC.
The problem is: access to your instance does not mean access to the instance's filesystem. To delete the files from Lambda you have two alternatives:
1. Configure a network filesystem service on your instances and connect to this service from your Lambda function. On Windows you would just "share" your disks, but in that case you would need some SMB library in your Lambda code, since Lambda ("I think") has no native SMB support. Just keep in mind that your security guy will scream out loud when you propose this alternative.
2. Create an "agent" on your EC2 instances, keep it running as a Windows Service, and call this agent from your Lambda function. In that case, the Lambda starts the execution of the agent, which is responsible for the file deletion.
Another option is to follow Ramesh's suggestion: create a PowerShell script and configure a cron job (or scheduled task). To make this easy, you can create an image with this PowerShell script and use the image to initialize each instance. The same approach would also work for the "agent" solution among the Lambda alternatives.
I think that, in any case, you will need to change something on your 150 servers. Using a customized image can help simplify this a little, but you will not get a solution without some changes.
According to the following thread, you cannot access files inside an EC2 VM unless you expose them through some other mechanism.
AWS Forum
Quoting from the forum:
If you are talking about the underlying EC2 instance, answer is No, you cannot access those files.
However, as a solution to your problem, you can use a scheduled job to clean up your files depending on your usage. You can use a service or a cron job.
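For example, a cron-based sketch for a Linux instance (the path and retention period are placeholders):

    # crontab entry: every day at 02:00, delete logs older than seven days
    0 2 * * * find /var/log/myapp -type f -name '*.log' -mtime +7 -delete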