I have one Lambda that invokes another Lambda. The first Lambda inserts data from S3 into DynamoDB and then invokes the second Lambda, which reads the data back from DynamoDB and creates an Excel file in S3.
With a few hundred records everything works fine, but with 1500+ records the first Lambda still inserts the data into DynamoDB correctly and invokes the second Lambda, yet the second Lambda creates two files: one with the expected number of records and a duplicate with fewer records than expected.
I tried increasing the timeout for both Lambdas, but it did not help.
While I am not sure why your code generates duplicate Excel files, I want to offer some insights from an application/architecture perspective:
Why do you need to insert the data into DynamoDB first and then use another process to read it back from DynamoDB to generate the Excel file? It seems to me the whole flow can be done in a single Lambda: the function reads the data from the source S3 bucket into its local /tmp directory or into memory, inserts it into DynamoDB, and then uses the same local data to generate the Excel file in the target S3 bucket.
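As a rough illustration of that single-Lambda flow (not your original code): the sketch below assumes an S3 trigger, a CSV source file, placeholder bucket and table names, and that openpyxl is packaged with the function.

```python
# A minimal single-Lambda sketch (assumptions: an S3 trigger, a CSV source file,
# placeholder bucket/table names, and openpyxl packaged with the function).
import csv
import io

import boto3
from openpyxl import Workbook

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("records-table")  # placeholder table name

def handler(event, context):
    # Read the newly uploaded source object from S3 into memory.
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    rows = list(csv.DictReader(obj["Body"].read().decode("utf-8").splitlines()))

    # Insert the rows into DynamoDB in batches.
    with table.batch_writer() as batch:
        for row in rows:
            batch.put_item(Item=row)

    # Build the Excel file from the same in-memory rows, without reading DynamoDB back.
    wb = Workbook()
    ws = wb.active
    if rows:
        ws.append(list(rows[0].keys()))
        for row in rows:
            ws.append(list(row.values()))
    buf = io.BytesIO()
    wb.save(buf)
    s3.put_object(Bucket="target-bucket", Key="report.xlsx", Body=buf.getvalue())
```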
I am interested in doing automated real-time data processing on AWS using Lambda, and I am not certain how to trigger my Lambda function. My data processing code takes multiple files, performs calculations on each one, and concatenates them into a single data frame. Since the files are uploaded to S3 simultaneously and depend on each other, I would like the Lambda to be triggered only after all files have been uploaded.
Current Approaches/Attempts:
- I am considering an S3 trigger, but my concern is that the Lambda would start as soon as a single file is uploaded and then fail because the other files are not there yet. An alternative would be adding a wait time, but that is not preferred because it ties up compute resources.
- A scheduled trigger using CloudWatch/EventBridge, but this would not be real-time processing.
- An SNS trigger, but I am not certain the message can be automated without knowing when the file uploads are complete.
Any suggestion is appreciated! Thank you!
If you really cannot do it with a scheduled function, the best option is to trigger a Lambda function when an object is created.
The tricky bit is that it will fire your function on each object upload. So you can either identify the "last part", e.g. based on some metadata, or you will need to store and track the state of all uploads, e.g. in a DynamoDB table, and do the actual processing only when a batch is complete.
Best, Stefan
Your file, arriving in parts, might be named like this:
filename_part1.ext
filename_part2.ext
If one of your systems generates those files, have that system also generate a final dummy blank file named:
filename.final
Since an S3 event trigger can filter on a key suffix, use the .final extension to invoke the Lambda and process the records.
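As a hedged sketch, the suffix filter might be configured like this (the bucket name and function ARN are placeholders; the resource-based permission allowing s3.amazonaws.com to invoke the function is omitted):

```python
# Hedged sketch of the ".final" suffix filter: only the dummy marker object fires the
# Lambda, not each filename_partN.ext upload. Bucket name and function ARN are
# placeholders; the resource-based permission for s3.amazonaws.com is omitted.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="incoming-parts-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-batch",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".final"}]}},
            }
        ]
    },
)
```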
As an alternative approach, if you do not have access to the server putting objects into your S3 bucket, then on each PUT operation in the bucket, invoke the Lambda and insert an entry into DynamoDB.
You need to put a unique entry per file (not per file part) in DynamoDB with:
filename and last_part_received_time
The last_part_received_time keeps getting updated as long as file parts keep arriving.
Now, this table can be checked by a cron-invoked Lambda that looks at the time skew (the difference between the SYSTIME of the Lambda invocation and the entry's last_part_received_time) and processes the records once that gap is large enough.
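A minimal sketch of that scheduled check, assuming the table and attribute names above, with last_part_received_time stored as epoch seconds; the quiet period and process_file are placeholders:

```python
# Hedged sketch of the cron check: a scheduled Lambda scans the tracking table and
# treats a file as complete once no new part has arrived for a quiet period.
# Table name, attribute names, and the threshold are assumptions.
import time

import boto3

table = boto3.resource("dynamodb").Table("file-upload-tracker")  # placeholder

QUIET_PERIOD_SECONDS = 300  # how long with no new parts before we process

def handler(event, context):
    now = int(time.time())
    for item in table.scan()["Items"]:
        if now - int(item["last_part_received_time"]) >= QUIET_PERIOD_SECONDS:
            process_file(item["filename"])

def process_file(filename):
    print(f"all parts of {filename} appear complete, processing...")  # placeholder
```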
I would still prefer the first approach, as the second one still leaves room for error.
Since you want this to be as real time as possible, perhaps you could just perform your logic every single time a file is uploaded, updating the version of the output as new files are added, and iterating through an S3 prefix per grouping of files, like in this other SO answer.
In terms of the architecture, you could add in an SQS queue or two to make this more resilient. An S3 Put Event can trigger an SQS message, which can trigger a Lambda function, and you can have error handling logic in the Lambda function that puts that event in a secondary queue with a visibility timeout (sort of like a backoff strategy) or back in the same queue for retries.
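A rough sketch of that retry path, assuming the S3 event reaches the Lambda through an SQS trigger; the queue URL, the delay value, and process_record are placeholders, not a definitive implementation:

```python
# Rough sketch of the retry path, assuming the S3 event reaches this Lambda through an
# SQS trigger. On failure the message body is pushed to a secondary queue with a delay,
# which acts as a crude backoff. Queue URL and process_record are placeholders.
import json

import boto3

sqs = boto3.client("sqs")
RETRY_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-events-retry"

def handler(event, context):
    for record in event["Records"]:  # SQS batch
        try:
            process_record(json.loads(record["body"]))
        except Exception:
            sqs.send_message(
                QueueUrl=RETRY_QUEUE_URL,
                MessageBody=record["body"],
                DelaySeconds=60,  # wait a bit before the retry consumer sees it
            )

def process_record(s3_event):
    ...  # placeholder for the real per-file logic
```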
I have one AWS Lambda that kicks off (via SNS events) multiple Lambdas, which in turn kick off (via SNS events) multiple Lambdas. All of these Lambdas write files to S3, and I need to know when all the files have been written. Another Lambda will then send a final SNS message containing references to all the files produced. The amount of fan-out in the second set of Lambdas is unknown, as it depends on the first fan-out.
If this were a single fan-out I would know how many files to look for, but since it is a two-step fan-out I am unsure how to monitor for all the files. Has anybody dealt with this before? Thanks.
I would create a DynamoDB table for tracking this process. Create a single record in the table when the initial Lambda function kicks off, with a unique ID (a UUID or similar) if you don't already have one for the process. Add that unique ID to the SNS messages; it will be the key used for all updates performed by the other processes. When the first process creates the record, also add a splitters_invoked property set to the number of second-level splitter functions it is invoking, and a splitters_complete property set to 0.
Inside the second-level splitter functions you can use DynamoDB Conditional Updates to add the list of files they created, with their S3 locations, to the record. Just before they exit, the second-level splitters also use DynamoDB Atomic Counters to increment the splitters_complete count.
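For illustration only, a second-level splitter's final update might look something like this; the table, key, and attribute names are assumptions:

```python
# Illustration only: what a second-level splitter might do just before it exits,
# appending its S3 outputs to the tracking record and atomically incrementing the
# splitters_complete counter. Table, key, and attribute names are assumptions.
import boto3

table = boto3.resource("dynamodb").Table("fanout-tracker")  # placeholder

def record_splitter_done(process_id, created_files):
    table.update_item(
        Key={"process_id": process_id},
        UpdateExpression=(
            "SET files = list_append(if_not_exists(files, :empty), :new) "
            "ADD splitters_complete :one"
        ),
        ExpressionAttributeValues={
            ":empty": [],
            ":new": created_files,  # e.g. ["s3://bucket/out/part-1.json", ...]
            ":one": 1,
        },
    )
```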
At the "process" level, each of those invocations will perform another Conditional Update to the DynamoDB record flagging the individual file they just processed as complete.
Finally, configure DynamoDB streams to trigger another Lambda function. This lambda function will check two conditions: splitters_complete is equal to splitters_invoked, and all files in the file list are marked as "completed". Then it will know that it can perform the final step in your process.
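A minimal sketch of that streams-triggered checker, assuming the stream is configured with a NEW_IMAGE view and using a hypothetical file_status map for the per-file flags; the attribute names are assumptions:

```python
# Minimal sketch of the streams-triggered checker, assuming a NEW_IMAGE stream view
# and a hypothetical file_status map on the record; attribute names are assumptions.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attributes
        invoked = int(image["splitters_invoked"]["N"])
        complete = int(image["splitters_complete"]["N"])
        files = image.get("file_status", {}).get("M", {})
        all_files_done = bool(files) and all(v["S"] == "completed" for v in files.values())
        if invoked > 0 and invoked == complete and all_files_done:
            trigger_final_step(image["process_id"]["S"])

def trigger_final_step(process_id):
    print(f"all splitters and files complete for {process_id}")  # placeholder
```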
Alternatively, if you don't want to keep the list of S3 file locations in the DynamoDB table, simply use atomic counters for that as well, one counter for the total number of files created by the second level splitters, and another counter for the file processing functions.
I'm currently playing around with AWS for some serverless CSV processing. Decent familiarity with EC2 and Dynamo. I'm sure there is a better way to structure this, and I've not found an efficient way to store the data. Any architecture suggestions would be much appreciated.
This flow will take in a CSV uploaded to S3, process all the rows of tuples and output a new CSV of processed data to S3.
What are 1) the optimal architecture and 2) the optimal place to store the processed data until queue processing is complete and the CSV can be built?
Data flow and service architecture:
CSV (contains tuples) (S3) -> CSV processing (Lambda) -> Queue (SNS) -> Queue Processing (Lambda) -> ????? temporary storage for queue items that have been processed before they get written to CSV ???? (what to use here?) -> CSV building (Lambda) -> CSV storage (S3)
Clever ideas appreciated.
I believe you are overcomplicating matters.
S3 can invoke a Lambda function when events occur; this is set up directly in the S3 bucket's event notifications.
So use this to create a converted version of the CSV in another bucket.
Amazon has an example of how to do this sort of thing here:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
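A minimal sketch of that pattern, assuming a CSV input, a simple per-row transformation, and a hypothetical destination bucket; the trigger itself is configured on the source bucket's event notifications as in the linked example:

```python
# Minimal sketch: an S3-triggered Lambda that reads the uploaded CSV, transforms each
# row, and writes the converted CSV to another bucket. Bucket name and transform()
# are placeholders.
import csv
import io

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-processed-csv-bucket"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        out = io.StringIO()
        writer = csv.writer(out)
        for row in csv.reader(text.splitlines()):
            writer.writerow(transform(row))

        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=out.getvalue())

def transform(row):
    return row  # placeholder for the real per-row calculation
```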
Update (reply to this comment):
it doesn't parallelize anything
You can divide the task equally if you have a good idea of how many tuples can be processed by a single Lambda within its time limit.
For example, given the following info...
original CSV contains 50,000 tuples
a single Lambda can process 5000 tuples within the time limit.
You can then do 10 parallel asynchronous invocations of the processor Lambda, each working with a different offset.
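For example, the Listener's fan-out might look roughly like this; the function name, payload shape, and the 50,000/5,000 figures from above are illustrative:

```python
# Illustrative fan-out from the Listener: ten asynchronous invocations of a
# hypothetical "csv-processor" function, each with its own offset into the CSV.
import json

import boto3

lambda_client = boto3.client("lambda")

TOTAL_TUPLES = 50_000
CHUNK = 5_000  # what one invocation can finish within the time limit

def fan_out(s3_path):
    for offset in range(0, TOTAL_TUPLES, CHUNK):
        lambda_client.invoke(
            FunctionName="csv-processor",   # placeholder function name
            InvocationType="Event",         # asynchronous invocation
            Payload=json.dumps({"s3_path": s3_path, "offset": offset, "limit": CHUNK}),
        )
```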
Original answer:
You can make it work with two Lambdas:
Listener
An S3-triggered Lambda whose only job is to pass the S3 path of the newly uploaded CSV to the Processor Lambda.
Processor
A Lambda that is triggered by the Listener. It requires the S3 path and an offset as parameters (where the offset is the row of the CSV at which it should start processing).
This Lambda performs the actual processing of your CSV rows. It keeps track of which row it is currently processing, and just before the Lambda time limit is reached it stops and invokes itself with the same S3 path but a new offset.
So, basically, it's a recursive Lambda that invokes itself until all CSV rows are processed.
To check the remaining time, you can use the context.getRemainingTimeInMillis() method (NodeJS) inside a while or for loop in your handler.
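A Python sketch of the same recursive Processor (where the NodeJS context.getRemainingTimeInMillis() becomes context.get_remaining_time_in_millis()); load_rows, process_row, and the payload shape are placeholders:

```python
# Sketch of the recursive Processor: work through rows until time runs low, then
# re-invoke this same function asynchronously with the current offset.
import json

import boto3

lambda_client = boto3.client("lambda")
SAFETY_MARGIN_MS = 30_000  # stop well before the hard time limit

def handler(event, context):
    rows = load_rows(event["s3_path"])
    offset = event.get("offset", 0)

    while offset < len(rows):
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            # Out of time: re-invoke ourselves with the current offset and exit.
            lambda_client.invoke(
                FunctionName=context.function_name,
                InvocationType="Event",
                Payload=json.dumps({"s3_path": event["s3_path"], "offset": offset}),
            )
            return
        process_row(rows[offset])
        offset += 1

def load_rows(s3_path):
    return []  # placeholder: read and parse the CSV from S3

def process_row(row):
    ...  # placeholder: the actual per-row processing
```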
Considering that AWS Data Pipeline is not available in the Singapore region, are there any alternatives for efficiently pushing CSV data into DynamoDB?
If it were me, I would set up an S3 event notification on a bucket that fires a Lambda function each time a CSV file is dropped into it.
The notification would let Lambda know that a new file is available, and the Lambda function would be responsible for loading the data into DynamoDB.
This works better (because of Lambda's limits) if the CSV files are not huge, so they can be processed in a reasonable amount of time. The bonus is that, once it is working, the only work left is to drop new files into the right bucket - no server required.
Here is a GitHub repository that has a CSV -> DynamoDB loader written in Java - it might help get you started.
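This is not that Java loader, just a hedged Python sketch of the same idea for reasonably sized files; the bucket, the table name, and the assumption that the CSV header matches the table's key schema are all placeholders:

```python
# Sketch of an S3-triggered Lambda that streams a CSV into DynamoDB with a batch
# writer. Table name is a placeholder; assumes the CSV header matches the key schema.
import csv

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("csv-import-table")  # placeholder

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        with table.batch_writer() as batch:
            for row in csv.DictReader(text.splitlines()):
                batch.put_item(Item=row)
```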
I am seeking advice on the best way to design this -
Use Case
I want to put multiple files into S3. Once all files are successfully saved, I want to trigger a lambda function to do some other work.
Naive Approach
The way I am approaching this is by saving a record in Dynamo that contains a unique identifier and the total number of records I will be uploading along with the keys that should exist in S3.
A basic implementation would be to take my existing Lambda function, which is invoked any time my S3 bucket is written to, and have it manually check whether all the other files have been saved.
The Lambda function would know what to look for (by looking in DynamoDB) and query S3 to see whether the other files are there. If so, it would use SNS to trigger my other Lambda that does the rest of the work.
Edit: Another approach is to have my client program, which puts the files in S3, be responsible for directly invoking the other Lambda function, since technically it knows when all the files have been uploaded. The issue with this approach is that I do not want this to be the client program's responsibility... I want the client program not to care. As soon as it has uploaded the files, it should be able to just exit.
Thoughts
I don't think this is a good idea, mainly because Lambda functions should be lightweight, and polling the database from within the Lambda function for the S3 keys of all the uploaded files and then checking S3 for them on every invocation seems hacky and very repetitive.
What's a better approach? I was thinking of something like SWF, but I am not sure whether that's overkill for my solution or whether it will even let me do what I want. The documentation doesn't show real "examples" either; it's just a discussion without much of a step-by-step guide (perhaps I'm looking in the wrong spot).
Edit: In response to mbaird's suggestions below -
Option 1 (SNS): This is what I will go with. It's simple and doesn't really violate the Single Responsibility Principle: the client uploads the files and sends a notification (via SNS) that its work is done.
Option 2 (DynamoDB Streams): This is essentially another "implementation" of Option 1. The client makes a service call, which in this case results in a table update rather than an SNS notification (Option 1). This update would trigger the Lambda function instead of a notification. Not a bad solution, but I prefer using SNS for communication rather than relying on a database capability (in this case DynamoDB Streams) to call a Lambda function.
In any case, I'm using AWS technologies and have coupling with their offerings (Lambda functions, SNS, etc.), but relying on something like DynamoDB Streams feels like an even tighter coupling. Not really a huge concern for my use case, but it still feels dirty ;D
Option 3 (S3 triggers): My concern here is the possibility of race conditions. For example, if the client uploads multiple files simultaneously (think of several async uploads fired off at once with varying file sizes), what if two files happen to finish uploading at around the same time, and two or more Lambda functions (or whatever implementation we use) query DynamoDB and both get back N as the number of completed uploads (instead of N and N+1)? Even though the final result should be N+2, each one would add 1 to N. Nooooooooooo!
So Option 1 wins.
If you don't want the client program to be responsible for invoking the Lambda function directly, would it be OK if it did something a bit more generic?
Option 1: (SNS) What if it simply notified an SNS topic that it had completed a batch of S3 uploads? You could subscribe your Lambda function to that SNS topic.
Option 2: (DynamoDB Streams) What if it simply updated the DynamoDB record with something like an attribute record.allFilesUploaded = true. You could have your Lambda function trigger off the DynamoDB stream. Since you are already creating a DynamoDB record via the client, this seems like a very simple way to mark the batch of uploads as complete without having to code in knowledge about what needs to happen next. The Lambda function could then check the "allFilesUploaded" attribute instead of having to go to S3 for a file listing every time it is called.
Alternatively, don't insert the DynamoDB record until all files have finished uploading, then your Lambda function could just trigger off new records being created.
Option 3: (continuing to use S3 triggers) If the client program can't be changed from how it works today, then instead of listing all the S3 files and comparing them to the list in DynamoDB each time a new file appears, simply update the DynamoDB record via an atomic counter. Then compare the resulting value against the size of the file list. Once the values are the same, you know all the files have been uploaded. The downside is that you need to provision enough capacity on your DynamoDB table to handle all the updates, which is going to increase your costs.
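A hedged sketch of that atomic-counter variant, assuming the client created the batch record with an expected_files count up front and that the S3 keys encode the batch ID; table and attribute names are placeholders. Because the increment is atomic, two near-simultaneous uploads cannot both read the same count, which addresses the race condition described above.

```python
# Sketch: each ObjectCreated event bumps an atomic counter on the batch's DynamoDB
# record; whichever invocation sees the counter reach the expected total kicks off
# the downstream work. Assumes keys look like "<batch_id>/<filename>" and that the
# record already holds expected_files.
import boto3

table = boto3.resource("dynamodb").Table("upload-batches")  # placeholder

def handler(event, context):
    key = event["Records"][0]["s3"]["object"]["key"]
    batch_id = key.split("/")[0]
    result = table.update_item(
        Key={"batch_id": batch_id},
        UpdateExpression="ADD files_uploaded :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="ALL_NEW",
    )
    item = result["Attributes"]
    if item["files_uploaded"] == item["expected_files"]:
        start_processing(batch_id)  # e.g. publish to SNS or invoke the next Lambda

def start_processing(batch_id):
    print(f"all files for batch {batch_id} are in S3")  # placeholder
```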
Also, I agree with you that SWF is overkill for this task.