I have a Lambda function that converts CSV files into JSON files.
The problem is that, depending on the file, the execution can take 5 seconds, 400 seconds, or maybe more.
Do you think Lambda would be a good solution for this case if I configure the timeout to 10 minutes or something really high?
At the time of writing, the maximum runtime for a Lambda function is 5 minutes (300 seconds). So if you expect your runtimes might exceed this, Lambda is not a suitable technology to use. An AWS product like Batch or ECS with Fargate might be more suitable.
After a meeting with an AWS technical advisor, I was given these possible solutions:
AWS Batch
AWS Glue
AWS Glue may be the better solution: it is an ETL service, and it was introduced by AWS for exactly this kind of problem.
So the solution will be an AWS Lambda function that calls AWS Glue to transform a file from Amazon S3; I will invoke the Lambda through Amazon API Gateway.
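A minimal sketch of that flow, assuming a Glue job named csv-to-json-job has already been created and that API Gateway passes the S3 key of the CSV as a query string parameter (both the job name and the parameter names are placeholders):

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the S3 key of the CSV to convert;
    # the "key" parameter name is an assumption for this sketch.
    s3_key = event["queryStringParameters"]["key"]

    # Start the pre-created Glue job that performs the CSV -> JSON transform.
    # The job name and the custom job arguments are placeholders.
    response = glue.start_job_run(
        JobName="csv-to-json-job",
        Arguments={
            "--input_key": s3_key,
            "--output_prefix": "json-output/",
        },
    )

    # Return the run id so the caller can poll the job status later.
    return {"statusCode": 202, "body": response["JobRunId"]}
```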
I am looking to trigger code every hour in AWS.
The code should parse through a list of zip codes, fetch data for each zip code, and store that data somewhere in AWS.
Is there a specific AWS service I would use to parse through the list of zip codes and call the API for each zip code? Would this be Lambda?
How could I schedule this service to run every X hours? Do I have to use another AWS Service to call my Lambda function (assuming that's the right answer to #1)?
Which AWS service could I use to store this data?
I tried looking up different approaches and services in AWS. I found I could write serverless code in Lambda, which made me think it would be the answer to my first question. Then I tried to look into how that code could be run every X hours, but I was struggling to work out whether I could still use Lambda for that, and where my options were for storing the data. I saw that Glue may be an option, but wasn't sure.
Yes, you can use Lambda to run your code (as long as the total run time is less than 15 minutes).
You can use Amazon EventBridge Scheduler to trigger the Lambda every hour.
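For example, a rate expression can be used for the hourly schedule. A rough sketch using boto3, where the schedule name, function ARN, and role ARN are placeholders (the role must allow EventBridge Scheduler to invoke the function):

```python
import boto3

scheduler = boto3.client("scheduler")

# Create an EventBridge Scheduler schedule that invokes the Lambda every hour.
# The ARNs below are placeholders for your function and its invocation role.
scheduler.create_schedule(
    Name="hourly-zip-code-fetch",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:fetch-zip-data",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
)
```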
Which AWS service could I use to store this data?
That depends on the format of the data and how you will subsequently use it. Some options are:
Amazon DynamoDB for key-value, NoSQL data
Amazon Aurora for relational data
Amazon S3 for object storage
If you choose S3, you can still do SQL-like queries on the data using Amazon Athena.
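Putting the pieces together, the scheduled function itself could look roughly like this sketch, which assumes a hypothetical HTTP API for the zip-code data and an S3 bucket named zip-code-data (the zip code list, URL, and bucket name are all placeholders):

```python
import json
import urllib.request

import boto3

s3 = boto3.client("s3")

ZIP_CODES = ["10001", "94105", "60601"]        # placeholder list of zip codes
DATA_URL = "https://example.com/api?zip={}"    # hypothetical data API

def lambda_handler(event, context):
    for zip_code in ZIP_CODES:
        # Fetch the data for this zip code from the (hypothetical) API.
        with urllib.request.urlopen(DATA_URL.format(zip_code)) as resp:
            payload = json.loads(resp.read())

        # Store each result as a JSON object in S3; Athena can query these later.
        s3.put_object(
            Bucket="zip-code-data",
            Key=f"raw/{zip_code}.json",
            Body=json.dumps(payload),
        )
```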
I am invoking an AWS Lambda function locally using the AWS SAM CLI, and I have set the Timeout property to 900 seconds, but it still shows a function timeout error. However, when I invoked this function from the Lambda console, those 900 seconds were enough for the inference.
Please help me figure out a solution for this issue. Also, what is the maximum limit I can set for Timeout?
AWS Lambda functions (as at July 2021) can only run for a maximum of 15 minutes (which is 900 seconds).
Some people do 'interesting' things like:
Call another Lambda function to continue the work, or
Use AWS Step Functions to orchestrate multiple AWS Lambda functions
However, it would appear that your use-case is Machine Learning, which does not like to have operations stopped in the middle of processing. Therefore, AWS Lambda is not suitable for your use-case.
Instead, I would recommend using Amazon EC2 spot instances, which will likely be lower-cost for your use-case. While spot instances might occasionally be terminated, your use-case can probably handle the need to re-run some processing if this happens.
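As a rough sketch, a spot instance for this kind of job can be requested directly through the EC2 API; the AMI ID, instance type, and maximum price below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single Spot Instance to run the inference job.
# ImageId, InstanceType and MaxPrice are placeholders for your own values.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="g4dn.xlarge",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.25",
            "SpotInstanceType": "one-time",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```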
I'd like to attach a Lambda function to a topic in an AWS MSK Kafka cluster, but from what I understood in the AWS docs, that is not possible yet.
So I thought I could have a Lambda function that runs on an interval basis, using a CloudWatch Events rule to trigger it each minute.
Another option is to run a small EC2 instance that runs the client consumer.
I'm not sure that's the cheapest option, though. What are the most cost-effective solutions for implementing something that works like the Lambda-to-SQS connector?
You can now use an MSK topic as an event source for AWS Lambda (as of last week)! Per the release statement Amazon sent out, this feature comes at no additional cost.
Once you create your Lambda function, you can add MSK as a trigger. Make sure you attach the appropriate policies (as outlined in the article referenced below).
Here are a few relevant resources.
https://aws.amazon.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/
https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html
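Once the MSK trigger is in place, the function receives batches of records grouped by topic and partition, with the message values base64-encoded. A minimal sketch of a handler that just decodes and logs them (the topic name implied by the key is only illustrative):

```python
import base64
import json

def lambda_handler(event, context):
    # Records arrive grouped under "topic-partition" keys, e.g. "my-topic-0".
    for topic_partition, records in event["records"].items():
        for record in records:
            # Kafka message values are delivered base64-encoded.
            value = base64.b64decode(record["value"]).decode("utf-8")
            print(json.dumps({
                "topicPartition": topic_partition,
                "offset": record["offset"],
                "value": value,
            }))
```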
I have a batch job that I need to run on AWS, and I'm wondering what the best service to use is. The job needs to run once a day, so I think that a Lambda function with a CloudWatch rule triggering it would naturally do it. However, I'm starting to think that AWS Lambda is meant to be a service for handling requests: the official AWS library for integrating Spring Boot is very much oriented toward handling HTTP requests, and when creating a Lambda via the AWS Console, only test cases that send an input to the Lambda can be written.
So, is this a use case for AWS Lambda? Also, these functions can only run for up to 15 minutes. What should I use if my job needs to run longer?
The purpose of Lambda, as compared to AWS EC2, is to simplify building smaller, on-demand applications that are responsive to events and new information.
If your batch job runs within the 15-minute limit, then you can go with a Lambda function.
But if you need heavier batch processing, you should check out AWS Batch.
Here is a nice article which demonstrates the usage of AWS Batch.
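If you go the AWS Batch route, the daily trigger can still be a small scheduled Lambda that submits the job. A rough sketch, assuming a job queue and job definition have already been created (their names below are placeholders):

```python
import boto3

batch = boto3.client("batch")

def lambda_handler(event, context):
    # Submit the long-running job to AWS Batch.
    # The job queue and job definition names are placeholders.
    response = batch.submit_job(
        jobName="daily-run",
        jobQueue="my-job-queue",
        jobDefinition="my-job-definition",
    )
    return {"jobId": response["jobId"]}
```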
If you are already using a batch framework like Spring Batch, you can also take a look at ECS scheduled tasks with Fargate.
With ECS on Fargate, you can launch and stop containerized services that only need to run at certain times.
Here are some related articles on Fargate events and scheduled tasks, and on Scheduled Tasks.
If you're confident that your function will run for at most 15 minutes, AWS Lambda could be the solution. Here are the AWS Lambda limits, which could help you decide on that.
Also note that Lambda has cold starts: the function will run slower at first but will eventually pick up the pace. Here are some good reads about it that could help you decide on the Lambda direction, but feel free to check any other articles that explain it better.
This one shows a brief list of things you would want to consider and the factors affecting cold starts.
This one has a deeper explanation of cold starts and how they work internally.
What should I use if my job needs to run longer?
Depending on your infrastructure, you could maybe explore Scheduled Tasks.
I have a file of more than 30 GB stored in S3, and I want to write a Lambda function which will access that file, parse it, and then run some algorithm on it.
I am not sure my Lambda function can take that big a file and work on it, as the maximum execution time for a Lambda function is 300 seconds (5 minutes).
I found the S3 Transfer Acceleration feature, but will it help?
Considering this scenario, can anyone suggest a service other than Lambda to host my code as a microservice and parse the file?
Thanks in advance.
It depends entirely on the processing requirements and on how frequently the processing needs to run.
You can use Amazon EMR to parse the file and run the algorithm, and depending on your requirements you can terminate the cluster or keep it alive for frequent processing. https://aws.amazon.com/emr/getting-started/
You can try the Amazon Athena service (recently launched), which will help you parse and process files stored in S3. The infrastructure needs will be taken care of by Amazon. http://docs.aws.amazon.com/athena/latest/ug/getting-started.html
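If the file lands in S3 in a format Athena understands (CSV, JSON, Parquet, etc.), a Lambda function or any small client can simply kick off a query without provisioning compute. A rough sketch, where the database, table, and results bucket are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query over the large file(s) in S3; Athena does the heavy lifting.
# The database, table, and output location below are placeholders.
response = athena.start_query_execution(
    QueryString="SELECT col_a, COUNT(*) FROM my_table GROUP BY col_a",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```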
For complex processing-flow requirements, you can use combinations of AWS services, such as AWS Data Pipeline to manage the flow and AWS EMR or EC2 to run the processing tasks. https://aws.amazon.com/datapipeline/
Hope this helps, thanks