SQS Trigger Lambda with FileName in S3 for text extraction

I have a use case where a list of PDF files is stored in an S3 bucket. I have listed them and pushed them to SQS for text extraction, and created one Lambda that processes those files, given the bucket information and the AWS Textract details.
The issue is that the Lambda is timing out: SQS triggers multiple Lambda instances, one for each file, and all of them end up waiting on the Textract service.
I want Lambda to be triggered one by one for each SQS message (file name), so that the timeout does not occur, as we do have a rate limit for accessing AWS Textract.

Processing 100+ files is a time-consuming task; I would suggest taking no more than 10 files per Lambda execution.
Use SQS with Lambda as an event source.
https://dzone.com/articles/amazon-sqs-as-an-event-source-to-aws-lambda-a-deep
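For illustration, here is a minimal boto3 sketch of that setup; the queue ARN and function name are placeholders, and the reserved-concurrency call is an optional extra knob for staying under the Textract rate limit:

```python
import boto3

lambda_client = boto3.client("lambda")

# Deliver at most 10 SQS messages (file names) per Lambda invocation.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:pdf-files-queue",  # placeholder ARN
    FunctionName="textract-worker",  # placeholder function name
    BatchSize=10,
)

# Optionally cap concurrency so parallel instances don't all hit Textract at once.
lambda_client.put_function_concurrency(
    FunctionName="textract-worker",
    ReservedConcurrentExecutions=1,
)
```

With a reserved concurrency of 1, SQS effectively feeds the function one batch at a time, which matches the "one by one" behaviour asked for above.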

Related

Lambda invocation on two SNS events at the same time

I have a use case where I need to read two files which are in a different account. I will receive an SNS event with the file name, and I need to create an EMR cluster from the Lambda only if both files are available in the other account's S3 bucket.
Currently, every time I receive an SNS event I write a dummy file to my S3 bucket, and I create the EMR cluster only after ensuring, on the second SNS event, that the first file is available in my account's S3 bucket. This approach is working fine.
But I am unable to solve the issue of what happens if we receive two files at the same time in the other S3 bucket and get two SNS events around the same time, as each event thinks the other file hasn't arrived yet.
How would I solve this problem?
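One way to avoid that race (my own suggestion, not something from this thread) is to count the SNS events with an atomic DynamoDB update instead of checking for dummy files: both concurrent invocations increment the same counter, and exactly one of them sees it reach 2. A sketch, assuming a hypothetical table keyed on job_id:

```python
import boto3

dynamodb = boto3.client("dynamodb")

def both_files_arrived(job_id: str) -> bool:
    """Atomically count SNS events for a job. Returns True exactly once,
    for whichever of the two events arrives second."""
    response = dynamodb.update_item(
        TableName="emr-job-coordination",  # hypothetical table with partition key "job_id"
        Key={"job_id": {"S": job_id}},
        UpdateExpression="ADD files_seen :one",
        ExpressionAttributeValues={":one": {"N": "1"}},
        ReturnValues="UPDATED_NEW",
    )
    return response["Attributes"]["files_seen"]["N"] == "2"
```

Because the ADD update is atomic on the DynamoDB side, two events arriving at the same time cannot both observe a count of 1, so the EMR cluster is created exactly once.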

Is there a way to add delay to trigger a lambda from S3 upload?

I have a Lambda function which is triggered after a put/post event on an S3 bucket. This works fine if only one file is uploaded to the bucket.
However, at times multiple files may be uploaded, and the upload process can take up to 7 minutes to complete. This triggers my Lambda function multiple times, which adds the overhead of handling this in the code.
Is there any way to either trigger the Lambda only once for the complete upload, or to add a delay in the function and avoid multiple executions of the Lambda function?
There is no specific interval at which the files are uploaded to S3, so I cannot use a scheduler.
A delay feature was recently added for Lambda with Kinesis or DynamoDB event sources, but it's not supported for S3 events.
You can send events from S3 to SQS instead, and have your Lambda consume the SQS messages. It consumes them in batches by default.
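For illustration, a minimal boto3 sketch of routing the bucket's events to a queue; the bucket name and queue ARN are placeholders, and the queue's access policy must separately allow S3 to send messages to it:

```python
import boto3

s3 = boto3.client("s3")

# Route all ObjectCreated events to SQS instead of invoking Lambda directly.
s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:upload-events",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```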
It seems multipart upload is being used here by the client.
Maybe a duplicate of this? - AWS Lambda and Multipart Upload to/from S3
An alternative might be to have your Lambda function check for existence of all required files before moving on to the action you need to take. The Lambda function would still fire each time, but would exit quickly if not all files have been received yet.
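A sketch of that existence check, assuming a hypothetical fixed list of required keys:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

REQUIRED_KEYS = ["batch/part1.csv", "batch/part2.csv"]  # hypothetical required files

def handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    for key in REQUIRED_KEYS:
        try:
            s3.head_object(Bucket=bucket, Key=key)
        except ClientError:
            # A required file hasn't arrived yet; exit quickly and let the
            # trigger from the next upload re-run this check.
            return {"status": "waiting"}
    # All required files are present: do the real work here.
    return {"status": "processing"}
```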

Missing s3 events in AWS SQS

I have an AWS Lambda function that is supposed to be triggered by messages from Simple Queue Service (SQS). The SQS queue is supposed to get a notification when a new JSON file is written to my S3 bucket, or when an existing JSON file in the bucket is overwritten. The event type for both cases is s3:ObjectCreated, and I see notifications for both cases in my SQS queue.
Now, the problem is that pretty frequently there is a new file in S3 (or an updated existing file), but there is no corresponding message in SQS! So many files are missing, and Lambda is not aware that they should be processed.
In Lambda I print the whole content of the received SQS payload into the log, and then try to find those missed files with something like aws --profile aaa logs filter-log-events --log-group-name /aws/lambda/name --start-time 1554357600000 --end-time 1554396561982 --filter-pattern "missing_file_name_pattern", but can't find anything, which means that an s3:ObjectCreated event was not generated for the missing file.
Are there conditions that prevent s3:ObjectCreated events for new/updated S3 files? Is there a way to fix it, or a workaround of some kind, maybe?
According to AWS Documentation:
If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent. If you want to ensure that an event notification is sent for every successful write, you can enable versioning on your bucket. With versioning, every successful write will create a new version of your object and will also send an event notification.
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
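If you go the versioning route, enabling it is a one-liner with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# With versioning on, every successful write emits its own event notification.
s3.put_bucket_versioning(
    Bucket="my-json-bucket",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```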
Also, why not trigger the Lambda directly from S3?
Two possibilities:
1. Some events may be delayed or not sent at all: "Amazon S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer. On very rare occasions, events might be lost." This is, however, very rare.
2. There is a mistake on your side: either the Lambda is not printing what you expect when it processes the message, or you are not searching the logs correctly.
You should also make sure on SQS that all the records were ingested and processed successfully.
Make sure that you have all of the object-created event types checked off as triggers.
I had an issue where files larger than 8 MB were being uploaded as multipart uploads, which are listed as a separate trigger from the PUT trigger.
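As a quick sanity check, this sketch prints which event types the bucket notification currently sends (the bucket name is a placeholder); you want either the s3:ObjectCreated:* wildcard, or CompleteMultipartUpload listed explicitly alongside Put/Post/Copy:

```python
import boto3

s3 = boto3.client("s3")

config = s3.get_bucket_notification_configuration(Bucket="my-bucket")  # placeholder
for queue_config in config.get("QueueConfigurations", []):
    # e.g. ["s3:ObjectCreated:*"], or an explicit list that should include
    # "s3:ObjectCreated:CompleteMultipartUpload" as well as Put/Post/Copy.
    print(queue_config["Events"])
```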

How to determine how many times my Lambda executed in a certain period of time

I have one Lambda that is executed on an S3 put trigger.
Now, whenever any object is uploaded to S3, the Lambda is triggered.
Say someone uploads 5 files to S3; the Lambda will then execute once for each of the 5 files.
Is there any way for the Lambda to trigger only once for all 5 files?
And after those 5 triggers/executions complete, can I trace how many minutes the Lambda has not been executing because no files were uploaded?
Any help would be really appreciated.
If you have the bucket notification configured for object creation (s3:ObjectCreated), and you either haven't specified a filter or the filter matches the uploaded objects, your Lambda will be triggered once per uploaded object.
To see the number of invocations, you can look at the Invocations metric for your Lambda function in CloudWatch metrics.
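For example, a sketch of pulling that metric with boto3 (the function name is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Sum of invocations over the last 24 hours, in 1-hour buckets.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder name
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))
```

Hours with no data points also answer the second question: they show how long the function sat idle because nothing was uploaded.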
You may want to add a queue that handles the requests to process new files on S3.
A Kinesis data stream or SQS would be relevant here; if batching is important to you, Kinesis will probably be better.
The requests can be sent by a Lambda triggered by S3 as you described, but that Lambda would only send the request to the queue, and another Lambda would then process it. A simpler way would be to send the request from the same code that puts the object in S3 (if possible), as in the sketch below.
This way you can have statistics on how many requests were sent, processed, waiting, etc.
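A sketch of that simpler variant, with placeholder names, where the uploader itself enqueues the processing request:

```python
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def upload_and_enqueue(bucket: str, key: str, body: bytes, queue_url: str) -> None:
    """Upload the object and enqueue a processing request in one place,
    so every request is accounted for."""
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    sqs.send_message(QueueUrl=queue_url, MessageBody=key)
```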

Process files put into an S3 bucket in AWS Lambda in the order in which they were put

My current workflow is as follows:
User drops file into s3 bucket -> s3 bucket triggers event to lambda -> lambda processes the file in s3 bucket. It also invokes other lambdas.
I want to handle the scenario where multiple users will drop files in the s3 bucket simultaneously. I want to process the files such that the file put first gets processed first. To handle this, I want the lambda to process each file in a gap of 15 minutes (for example).
So, I want to use SQS to queue the input file drop events. S3 can trigger an event to SQS. A cloudwatch event can trigger a lambda in every 15 minutes, and the lambda can poll the SQS queue for the first s3 file drop event, and process it.
The problem with SQS is that standard SQS queues do not guarantee ordering, and FIFO SQS queues are not compatible with S3 event notifications (Ref: Error setting up notifications from S3 bucket to FIFO SQS queue due to required ".fifo" suffix)
What approach should I use to solve this problem?
Thanks,
Swagatika
You could have Amazon S3 trigger an AWS Lambda function, which then pushes the file info into a FIFO Amazon SQS queue.
There is a new capability where SQS can trigger Lambda, but you'd have to experiment to see how/whether that works with FIFO queues. If it works well, that could eliminate the '15 minutes' thing.
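A sketch of the first half (the S3-triggered Lambda forwarding to a FIFO queue), with a placeholder queue URL; using a single message group ID gives strict ordering across all files:

```python
import boto3
from urllib.parse import unquote_plus

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/file-events.fifo"  # placeholder

def handler(event, context):
    """Triggered by S3 ObjectCreated events; forwards each record to a FIFO
    queue so files are processed in arrival order."""
    for record in event["Records"]:
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=key,
            MessageGroupId="s3-file-drops",  # a single group => strict FIFO ordering
            MessageDeduplicationId=record["s3"]["object"].get("sequencer", key),
        )
```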