Triggering an AWS Glue job when one is in progress - amazon-web-services

I have a Lambda function which triggers a Glue job whenever a file is uploaded to S3. The Glue job then processes that file.
This works perfectly, but I'm wondering what will happen if another file is uploaded while the Glue job is still processing the first one. Will it cause an error, will it be ignored, or will it just wait for the first one to finish and then move on to the second one?

It depends on the Glue job settings you have in place. If you have allowed concurrent runs by setting Max concurrency, then the Lambda will trigger another run of the Glue job for that new file.
You can read about it here.
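As a rough sketch of the Lambda side (the job name and argument names here are illustrative assumptions, not part of the original setup), each invocation simply starts another job run and Glue enforces the Max concurrency limit:

import boto3

glue = boto3.client("glue")

def handler(event, context):
    # One record per uploaded object in the S3 event notification.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        try:
            # Each call starts a separate job run; how many may run at once
            # is governed by the job's Max concurrency setting.
            glue.start_job_run(
                JobName="my-glue-job",  # hypothetical job name
                Arguments={"--bucket": bucket, "--key": key},
            )
        except glue.exceptions.ConcurrentRunsExceededException:
            # The concurrency limit is reached; re-raise so Lambda can retry
            # the event (or route it to a DLQ) instead of dropping the file.
            raise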

Related

Is there a way to check if AWS Lambda is running from Java code?

There is a Lambda, say its name is XYZ, which has an S3 file upload trigger. Now let's say I upload multiple files, around 100, and the Lambda executions start.
Is there any way I can track whether the Lambda is still running or has processed all the files?
The reason for this is that once the Lambda has completed processing all the files, I want to trigger a Step Function, so I only want to do that once all the files have been processed by my Lambda (XYZ).
FYI: there is currently no way to track how many files have been uploaded.
I think it's not a good design to run a Step Functions state machine after the Lambda completes the job without a clear logical completion event.
For example, say the Lambda has finished a batch of 100 files and you fire the state machine once a CloudWatch alarm sees 100 on a custom metric, or once a counter in DynamoDB or the number of objects in a folder reaches 100; if another file arrives 5 seconds later as number 101, you may miss it.
If you don't have a definite event or condition that marks the completion of the files, you can work with time-based scheduling instead: trigger your Step Function from a CloudWatch scheduled event, say every 15 minutes, check whether there is work to do, and exit if there isn't. A sketch of that scheduled check follows below.
Otherwise, either include the Lambda (file processing) as a step inside your Step Function, or change your design.
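As a minimal illustration of the scheduled variant (the bucket name, prefix, and state machine ARN are hypothetical, and it assumes pending work can be detected by objects left under an "incoming" prefix):

import json
import boto3

s3 = boto3.client("s3")
sfn = boto3.client("stepfunctions")

# Hypothetical names -- adjust to your setup.
BUCKET = "xyz-input-bucket"
PENDING_PREFIX = "incoming/"
STATE_MACHINE_ARN = "arn:aws:states:eu-west-1:123456789012:stateMachine:post-processing"

def handler(event, context):
    # Runs on a CloudWatch/EventBridge schedule, e.g. every 15 minutes.
    pending = s3.list_objects_v2(Bucket=BUCKET, Prefix=PENDING_PREFIX, MaxKeys=1)
    if pending.get("KeyCount", 0) > 0:
        # Files are still waiting to be processed; try again on the next run.
        return {"status": "work remaining, exiting"}

    # Nothing left to process: start the Step Functions state machine once.
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"source": "scheduled-completion-check"}),
    )
    return {"status": "state machine started"}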

Automated Real Time Data Processing on AWS with Lambda

I am interested in doing automated real-time data processing on AWS using Lambda and I am not certain how to trigger my Lambda function. My data processing code involves taking multiple files and concatenating them into a single data frame after performing calculations on each file. Since files are uploaded simultaneously onto S3 and the files depend on each other, I would like the Lambda to be triggered only when all files have been uploaded.
Current Approaches/Attempts:
- I am considering an S3 trigger, but my concern is that an S3 trigger will result in an error when a single file upload starts the Lambda. An alternative would be adding a wait time, but that is not preferred because it ties up compute resources.
- A scheduled trigger using CloudWatch/EventBridge, but this would not be real-time processing.
- An SNS trigger, but I am not certain the message can be automated without knowing when the file uploads have completed.
Any suggestion is appreciated! Thank you!
If you really cannot do it with a scheduled function, the best option is to trigger a Lambda function when an object is created.
The tricky bit is that it will fire your function on each object upload. So you can either identify the "last part", e.g. based on some metadata, or you will need to store and track the state of all uploads, e.g. in DynamoDB, and do the actual processing only when a batch is complete.
Best, Stefan
Your file, coming in parts, might be named as:
filename_part1.ext
filename_part2.ext
If any of your systems generates those files, then use that system to also generate a final dummy blank file named:
filename.final
Since an S3 event trigger can filter on a suffix, use the .final extension to invoke the Lambda and process the records; a sketch of such a notification configuration is shown below.
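For reference, a hedged sketch of what the suffix filter could look like when configuring the bucket notification with boto3 (the bucket name and Lambda ARN are placeholders):

import boto3

s3 = boto3.client("s3")

# Note: this call replaces the bucket's existing notification configuration.
s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:process-batch",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        # Only the dummy marker file fires the function.
                        "FilterRules": [{"Name": "suffix", "Value": ".final"}]
                    }
                },
            }
        ]
    },
)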
In an alternative approach, if you do not have access to the system putting objects into your S3 bucket, then on each PUT operation in the bucket, invoke the Lambda and insert an entry in DynamoDB.
You need to put a unique entry per file (not per file part) in DynamoDB with:
filename and last_part_received_time
The last_part_received_time keeps getting updated as long as file parts keep arriving.
Now, this table can be looked up by a cron Lambda invocation which checks whether the time skew (the difference between the SYSTIME of the Lambda invocation and the entry's last_part_received_time in DynamoDB) is large enough to process the records.
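A rough sketch of that cron check, assuming a hypothetical tracking table named file-parts-tracker with filename and last_part_received_time (stored as an epoch timestamp) attributes:

import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("file-parts-tracker")  # hypothetical table name

# How long a file must be "quiet" before we treat it as complete.
QUIET_PERIOD_SECONDS = 120

def handler(event, context):
    # Scheduled (cron) invocation that finds files whose parts stopped arriving.
    now = int(time.time())
    ready = []
    # A scan is fine for a small tracking table; use a query/GSI at scale.
    for item in table.scan()["Items"]:
        if now - int(item["last_part_received_time"]) >= QUIET_PERIOD_SECONDS:
            ready.append(item["filename"])
    # Hand the completed files over to the actual processing step.
    return {"files_ready_for_processing": ready}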
I would still prefer the first approach, as the second one still leaves room for error.
Since you want this to be as real time as possible, perhaps you could just perform your logic every single time a file is uploaded, updating the version of the output as new files are added, and iterating through an S3 prefix per grouping of files, like in this other SO answer.
In terms of the architecture, you could add in an SQS queue or two to make this more resilient. An S3 Put Event can trigger an SQS message, which can trigger a Lambda function, and you can have error handling logic in the Lambda function that puts that event in a secondary queue with a visibility timeout (sort of like a backoff strategy) or back in the same queue for retries.
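As a non-authoritative sketch of that error-handling path (the queue URL and processing logic are placeholders), the Lambda consuming the SQS messages might look like this:

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical secondary queue used for retries.
RETRY_QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/file-events-retry"

def process_object(bucket, key):
    # Placeholder for the real per-file processing logic.
    pass

def handler(event, context):
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            try:
                process_object(bucket, key)
            except Exception:
                # Push the failed event onto the secondary queue with a delay,
                # which acts as a crude backoff before the next attempt.
                sqs.send_message(
                    QueueUrl=RETRY_QUEUE_URL,
                    MessageBody=sqs_record["body"],
                    DelaySeconds=300,
                )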

How to process files serially in cloud function?

I have written a Cloud Storage trigger based Cloud Function. I have 10-15 files landing at 5 second intervals in the bucket, and the function loads their data into a BigQuery table (truncate and load).
While there are 10 files in the bucket, I want the Cloud Function to process them sequentially, i.e. one file at a time, because all of the files access the same table.
Currently the Cloud Function is triggered for multiple files at a time and the BigQuery operation fails because multiple files try to access the same table.
Is there any way to configure this in a Cloud Function?
Thanks in Advance!
You can achieve this by using Pub/Sub and the max instances parameter on Cloud Functions.
Firstly, use the notification capability of Google Cloud Storage and sink the events into a Pub/Sub topic.
Now you will receive a message every time an event occurs on the bucket. If you want to filter on file creation only (object finalize), you can apply a filter on the subscription. I wrote an article on this.
Then, create an HTTP function (an HTTP function is required if you want to apply a filter) with max instances set to 1. This way, only one instance of the function can execute at a time, so there is no concurrency.
Finally, create a Pub/Sub subscription on the topic, with or without a filter, to call your function over HTTP.
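A minimal sketch of the HTTP function end of that chain (the entry point name and load helper are illustrative; the Pub/Sub push envelope carries the Cloud Storage notification as base64-encoded JSON):

import base64
import json

def load_file_into_bigquery(bucket, name):
    # Placeholder for the existing truncate-and-load logic.
    pass

def handle_gcs_event(request):
    envelope = request.get_json(silent=True)
    if not envelope or "message" not in envelope:
        return ("Bad Request: no Pub/Sub message received", 400)

    # The GCS notification payload is base64-encoded JSON with bucket/name fields.
    payload = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))
    bucket = payload["bucket"]
    name = payload["name"]

    # With max instances = 1, only one of these runs at a time,
    # so the truncate-and-load into BigQuery happens serially.
    load_file_into_bigquery(bucket, name)

    # A 2xx response acknowledges the Pub/Sub message.
    return ("", 204)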
EDIT
Thanks to your code, I understood what happens. BigQuery jobs are asynchronous: when you run a query or a load job, a job is created and runs in the background.
In Python you can explicitly wait for the end of the job, but with pandas I didn't find how!
I just found a Google Cloud page explaining how to migrate from pandas to the BigQuery client library. As you can see, there is a line at the end
# Wait for the load job to complete.
job.result()
that waits for the end of the job.
You did this correctly in the _insert_into_bigquery_dwh function, but not in the staging _insert_into_bigquery_staging one. This can lead to 2 issues:
- The dwh function works on old data, because the staging load isn't finished yet when you trigger the dwh job.
- If the staging load takes, say, 10 seconds and runs in the background (you don't wait for it explicitly in your code) while the dwh load takes 1 second, the next file is processed at the end of the dwh function even though the staging load is still running in the background. That is what leads to your issue.
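For illustration, a minimal sketch (the project, dataset, and DataFrame are placeholders) of loading a pandas DataFrame with the BigQuery client library and waiting explicitly before the next step:

import pandas as pd
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical staging table and data.
table_id = "my-project.my_dataset.staging_table"
df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

job_config = bigquery.LoadJobConfig(
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # truncate and load
)

job = client.load_table_from_dataframe(df, table_id, job_config=job_config)

# Wait for the load job to complete before moving on,
# so the dwh step never reads a half-finished staging table.
job.result()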
The architecture you describe isn't the same as the one in the documentation you linked. Note that in the flow diagram and the code samples, the storage event triggers the Cloud Function, which streams the data directly into the destination table. Since BigQuery allows multiple streaming insert jobs, several functions can execute at the same time without problems. In your use case, the intermediate table loaded with write-truncate for data cleaning makes a big difference, because each execution needs the previous one to finish, which requires a sequential processing approach.
I would also like to point out that Pub/Sub doesn't let you configure the rate at which messages are delivered: if 10 messages arrive at the topic, they will all be sent to the subscriber, even if they are processed one at a time. Limiting the function to one instance may lead to overhead for that reason and could increase latency as well. That said, since the expected workload is 15-30 files a day, this may not be a big concern.
If you'd like to have parallel executions, you could try creating a new table for each message and setting a short expiration on it via the table.expires setter (table.expires = exp_datetime), so that multiple executions don't conflict with each other. Here is the related library reference. Otherwise the great answer from Guillaume would completely get the job done.
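A small sketch of that per-message table idea with the Python client (the table name and schema are made up for illustration):

from datetime import datetime, timedelta, timezone
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical per-message table, e.g. named after the Pub/Sub message ID.
table = bigquery.Table(
    "my-project.my_dataset.staging_6543210987",
    schema=[
        bigquery.SchemaField("id", "INTEGER"),
        bigquery.SchemaField("value", "STRING"),
    ],
)

# Let BigQuery delete the table automatically one hour from now.
table.expires = datetime.now(timezone.utc) + timedelta(hours=1)

table = client.create_table(table)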

How to achieve Parallelism using AWS Batch Multi-node Parallel Jobs

I've got an SQS queue that is filled with a JSON message whenever my S3 bucket has a CREATE event.
The message contains the bucket and object name.
I also have a Docker image containing a Python script that reads the message from SQS. Using that message, it downloads the respective object from S3. Finally the script reads the object and puts some values into DynamoDB.
1. When submitting this as a single job to AWS Batch, I am able to achieve the above use case, but it's time consuming because I have 80k objects with an average size of 300 MB.
2. When submitting it as a multi-node parallel job, the job gets stuck in the RUNNING state and the master node goes to the FAILED state.
Note: the object type is MF4 (measurement file) from a vehicle logger, so it needs to be downloaded locally to be read with asammdf.
Question 1: How do I use an AWS Batch multi-node parallel job?
Question 2: Can I try any other services to achieve parallelism?
Answers with examples would be most helpful.
Thanks 😊
I think you're looking for AWS Batch Array Jobs, not MNP Jobs. MNP jobs are for spreading one job across multiple hosts (MPI or NCCL).
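On the array-job route, a hedged sketch of the submission call (the queue and job definition names are hypothetical):

import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="mf4-ingest",
    jobQueue="my-batch-queue",          # hypothetical job queue
    jobDefinition="mf4-ingest-jobdef",  # hypothetical job definition
    arrayProperties={"size": 1000},     # 1000 child jobs run in parallel
)

# Each child job gets its index in the AWS_BATCH_JOB_ARRAY_INDEX environment
# variable, which the container script can use to decide which slice of the
# 80k objects (or which SQS messages) it should process.
print(response["jobId"])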

Glue Job fails to write file

I am backfilling some data via Glue jobs. The job itself reads a TSV from S3, transforms the data slightly, and writes it as Parquet to S3. Since I already have the data, I am trying to launch multiple jobs at once to reduce the amount of time needed to process it all. When I launch multiple jobs at the same time, I sometimes run into an issue where one of the jobs fails to output the resulting Parquet files to S3, even though the job itself completes successfully without throwing an error. When I rerun the job as a non-parallel task, the files are output correctly. Is there some issue, either with Glue (or the underlying Spark) or S3, that would cause this?
The same Glue job running in parallel may produce files with the same names, and therefore some of them can be overwritten. If I remember correctly, the transformation context is used as part of the file name. I assume you don't have job bookmarks enabled, so it should be safe for you to generate the transformation-context value dynamically to ensure it's unique for each run.
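A rough sketch of what generating a per-run transformation context might look like in the Glue script (the S3 paths are placeholders):

import sys
import uuid
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# A per-run suffix so parallel runs never share a transformation context.
run_suffix = uuid.uuid4().hex[:8]

frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/input/"]},  # hypothetical path
    format="csv",
    format_options={"separator": "\t"},
    transformation_ctx="read_tsv_" + run_suffix,
)

glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},  # hypothetical path
    format="parquet",
    transformation_ctx="write_parquet_" + run_suffix,
)

job.commit()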