How to achieve Parallelism using AWS Batch Multi-node Parallel Jobs - amazon-web-services

I've got an SQS queue that receives a JSON message whenever my S3 bucket has a CREATE event.
The message contains the bucket and object name.
I also have a Docker image containing a Python script that reads a message from SQS, uses it to download the corresponding object from S3, then reads the object and puts some values into DynamoDB.
1. When I submit this as a single job to AWS Batch, I can achieve the above use case, but it's time consuming because I have 80k objects with an average object size of 300 MB.
2. When I submit it as a multi-node parallel job, the job gets stuck in the RUNNING state and the master node goes to a FAILED state.
Note: the objects are MF4 (measurement) files from a vehicle logger, so they need to be downloaded locally to be read with asammdf.
Question 1: How do I use AWS Batch multi-node parallel jobs?
Question 2: Can I use any other services to achieve parallelism?
Answers with examples would be very helpful.
Thanks😊

I think you're looking for AWS Batch Array Jobs, not MNP Jobs. MNP jobs are for spreading one job across multiple hosts (MPI or NCCL).
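As a rough sketch of what an array job could look like (the job name, queue, and job definition below are placeholders, not anything from your setup), you submit one job with an arrayProperties size, and AWS Batch fans it out into that many child containers. Each child reads the documented AWS_BATCH_JOB_ARRAY_INDEX environment variable to decide which slice of the work to process:

import os
import boto3

batch = boto3.client("batch")

# Submitter side: one array job, fanned out into `size` child jobs that run in
# parallel (subject to your compute environment limits). Names are placeholders.
response = batch.submit_job(
    jobName="mf4-ingest",
    jobQueue="my-job-queue",
    jobDefinition="mf4-ingest-jobdef",
    arrayProperties={"size": 1000},
)
print("Submitted array job:", response["jobId"])

# Container side: each child job sees its own index (0..size-1) and can use it
# to pick which S3 objects (or which slice of a manifest) to handle.
index = int(os.environ.get("AWS_BATCH_JOB_ARRAY_INDEX", "0"))

Since your work items already sit in SQS, another common pattern is to size the array roughly to the queue depth and have every child simply poll SQS until the queue is empty, which avoids mapping indexes to objects at all.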

Related

Triggering an aws glue job when in progress

I have a Lambda function which triggers a Glue job to start running whenever a file is uploaded to S3. The Glue job then processes said file.
This works perfectly, but I'm wondering what will happen if another file is uploaded while the Glue job is still processing the first one. Will it cause an error, will it be ignored, or will it just wait for the first one to finish and then move on to the second one?
It depends on the Glue job settings you have in place. If you have allowed concurrent runs by setting Max concurrency, then the Lambda will trigger "another version" (a separate run) of the Glue job for that new file.
You can read about it here.
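For illustration, a minimal Lambda handler along these lines (the Glue job name and argument keys are placeholders) starts a new run per uploaded file; if the job's Max concurrency is exhausted, boto3 raises ConcurrentRunsExceededException instead of queuing the run:

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        try:
            run = glue.start_job_run(
                JobName="my-glue-job",  # placeholder job name
                Arguments={"--source_bucket": bucket, "--source_key": key},
            )
            print("Started run", run["JobRunId"], "for", key)
        except glue.exceptions.ConcurrentRunsExceededException:
            # Max concurrency reached: the new run is rejected, not queued.
            print("Concurrency limit hit, will need to retry:", key)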

Running multiple apache spark streaming jobs

I'm new to Spark Streaming, and from what I can see there are different ways of doing the same thing, which makes me a bit confused.
This is the scenario:
We have multiple events (over 50 different event types) happening every minute, and I want to do some data transformation, change the format from JSON to Parquet, and store the data in an S3 bucket. I'm creating a pipeline where we get the data, store it in an S3 bucket, and then the transformation happens (Spark jobs). My questions are:
1- Is it a good idea to run a Lambda function which sorts each event type into a separate subdirectory and then read each folder in Spark Streaming? Or is it better to store all the events in the same directory and read that in my Spark Streaming job?
2- How can I run multiple Spark Streaming queries at the same time? (I tried to loop through a list of schemas and folders, but apparently it doesn't work.)
3- Do I need an orchestration tool (Airflow) for my purpose? I need to look for new events all the time with no pause in between.
I'm going to use Kinesis Firehose -> S3 (data lake) -> EMR (Spark) -> S3 (data warehouse).
Thank you so much beforehand!

How to process files serially in cloud function?

I have written a Cloud Storage trigger-based Cloud Function. I have 10-15 files landing at 5-second intervals in the Cloud Storage bucket, and each one loads data into a BigQuery table (truncate and load).
While there are 10 files in the bucket, I want the Cloud Function to process them sequentially, i.e. 1 file at a time, as all the files access the same table.
Currently the Cloud Function is getting triggered for multiple files at a time, and it fails in the BigQuery operation because multiple files try to access the same table.
Is there any way to configure this in a Cloud Function?
Thanks in Advance!
You can achieve this by using Pub/Sub and the max instances parameter on the Cloud Function.
First, use the notification capability of Google Cloud Storage and sink the events into a Pub/Sub topic.
Now you will receive a message every time an event occurs on the bucket. If you want to filter on file creation only (object finalize), you can apply a filter on the subscription. I wrote an article on this.
Then, create an HTTP function (an HTTP function is required if you want to apply a filter) with max instances set to 1. That way, only 1 instance of the function can execute at a time, so there is no concurrency!
Finally, create a Pub/Sub push subscription on the topic, with or without a filter, to call your function over HTTP.
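As a rough sketch of that setup (project, topic, subscription, and function URL are placeholders, and the eventType filter attribute should be checked against your own notification configuration), the filtered push subscription could be created like this with the google-cloud-pubsub client:

from google.cloud import pubsub_v1

project_id = "my-project"                 # placeholder
topic_id = "gcs-uploads"                  # topic receiving the GCS notifications
subscription_id = "gcs-uploads-push"      # placeholder
function_url = "https://REGION-my-project.cloudfunctions.net/load_to_bq"  # placeholder

subscriber = pubsub_v1.SubscriberClient()
topic_path = f"projects/{project_id}/topics/{topic_id}"
subscription_path = subscriber.subscription_path(project_id, subscription_id)

# Push subscription that only delivers object-creation events to the
# HTTP-triggered Cloud Function (deployed with max instances set to 1).
subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "filter": 'attributes.eventType = "OBJECT_FINALIZE"',
        "push_config": pubsub_v1.types.PushConfig(push_endpoint=function_url),
    }
)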
EDIT
Thanks to your code, I understood what happens. BigQuery load jobs are asynchronous: when you perform a query or a load, a job is created and runs in the background.
In Python you can explicitly wait for the end of the job, but with pandas I didn't find how!
I just found a Google Cloud page that explains how to migrate from pandas to the BigQuery client library. As you can see, there is a line at the end:
# Wait for the load job to complete.
job.result()
which waits for the end of the job.
You did this correctly in the _insert_into_bigquery_dwh function, but not in the staging _insert_into_bigquery_staging one. This can lead to 2 issues:
The dwh function works on old data because the staging load isn't finished yet when you trigger that job.
If the staging load takes, say, 10 seconds and runs in the "background" (you don't wait for the end explicitly in your code) and the dwh step takes 1 second, the next file is processed at the end of the dwh function, even though the staging load is still running in the background. And that leads to your issue.
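For illustration, a staging load along these lines (the table ID is a placeholder, and this is only a sketch of what your _insert_into_bigquery_staging could do, not your actual code) blocks until the load job finishes, so the dwh step always sees fresh data:

from google.cloud import bigquery

client = bigquery.Client()

def _insert_into_bigquery_staging(dataframe, table_id="my_dataset.staging_table"):
    # Truncate-and-load the staging table and wait for the job to finish.
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    job = client.load_table_from_dataframe(dataframe, table_id, job_config=job_config)
    job.result()  # block here until the load job completes
    return job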
The architecture you describe isn't the same as the one from the documentation you linked. Note that in the flow diagram and the code samples the storage events trigger the Cloud Function, which streams the data directly to the destination table. Since BigQuery allows multiple concurrent streaming inserts, several function executions can run at the same time without problems. In your use case, the intermediate table loaded with write-truncate for data cleaning makes a big difference, because each execution needs the previous one to finish, which requires a sequential processing approach.
I would like to point out that Pub/Sub doesn't let you configure the rate at which messages are delivered: if 10 messages arrive at the topic, they will all be sent to the subscriber, even if they are processed one at a time. Limiting the function to one instance may add overhead for that reason and could increase latency as well. That said, since the expected workload is 15-30 files a day, this may not be a big concern.
If you'd like to have parallel executions, you may try creating a new table for each message and setting a short expiration on it via the table.expires setter, so that multiple executions don't conflict with each other. Here is the related library reference. Otherwise, the great answer from Guillaume would completely get the job done.
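A minimal sketch of that idea, assuming the google-cloud-bigquery client and made-up project, dataset, and schema names:

from datetime import datetime, timedelta, timezone
from google.cloud import bigquery

client = bigquery.Client()

def create_staging_table_for_message(message_id):
    # Per-message staging table that BigQuery deletes automatically after expiry.
    table_id = f"my-project.my_dataset.staging_{message_id}"  # placeholder IDs
    schema = [
        bigquery.SchemaField("name", "STRING"),
        bigquery.SchemaField("value", "FLOAT"),
    ]
    table = bigquery.Table(table_id, schema=schema)
    table.expires = datetime.now(timezone.utc) + timedelta(hours=1)
    return client.create_table(table)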

AWS SNS trigger when 3 objects are loaded into S3

I have a client that performs a weekly data upload of 3 CSV files to an S3 bucket, always within (at most) 5 minutes of each other. I have Python code that ingests the 3 files, aggregates them, and writes an output, and I would like to turn it into a Lambda function to fully automate this job. The issue is that I can't configure an S3 trigger that fires only after every 3 object creates. I could have the Lambda trigger on every upload and exit until the 3 files are there, but I don't want to do that as it's not really cost effective.
So I came across this question here that suggested having an SNS topic that gets notified after a batch of uploads is completed; however, I'm having trouble figuring out how to configure that. Basically what I'd like to do is create something similar to a CloudWatch alarm that triggers when 3 object PUTs have occurred within 5 minutes of each other. Is this possible? Or, how can I configure my SNS event in the way suggested in the linked question?
I think you are misreading the answer in the post you linked. The SNS option is literally someone just sending an SNS message after they have pushed their batch of files to S3. So the sender would need to update their process to include a new SNS message step.
Option 3 in that question/answer is about using S3 triggers to handle this in an automatic fashion, without the sender needing to do any extra steps besides upload files, but it is also much more complicated since it involves using a DynamoDB table with Atomic Counters to ensure the files aren't processed multiple times.
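For illustration only (the table, key, and attribute names below are made up, not taken from the linked answer), the atomic counter idea boils down to something like this inside the per-object Lambda:

import boto3

dynamodb = boto3.client("dynamodb")
EXPECTED_FILES = 3

def lambda_handler(event, context):
    for record in event["Records"]:
        # Atomically increment a counter for this batch (names are placeholders).
        batch_id = "weekly-upload"  # e.g. derive this from the upload date instead
        resp = dynamodb.update_item(
            TableName="upload-batches",
            Key={"batch_id": {"S": batch_id}},
            UpdateExpression="ADD files_received :one",
            ExpressionAttributeValues={":one": {"N": "1"}},
            ReturnValues="UPDATED_NEW",
        )
        if int(resp["Attributes"]["files_received"]["N"]) == EXPECTED_FILES:
            process_batch(batch_id)  # hypothetical aggregation step

def process_batch(batch_id):
    print("All files received for", batch_id)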
I would also like to point out that the cost of triggering a Lambda function for each S3 object, which you're concerned about, is going to be on the order of a few pennies a month, maybe less.

"Realtime" syncing of large numbers of log files to S3

I have a large number of logfiles from a service that I need to regularly run analysis on via EMR/Hive. There are thousands of new files per day, and they can technically come out of order relative to the file name (e.g. a batch of files comes a week after the date in the file name).
I did an initial load of the files via Snowball, then set up a script that syncs the entire directory tree once per day using the 'aws s3 sync' CLI command. This is good enough for now, but I will need a more realtime solution in the near future. The issue with this approach is that it takes a very long time, on the order of 30 minutes per day, and uses a ton of bandwidth all at once! I assume this is because it needs to scan the entire directory tree to determine what files are new, then send them all at once.
A realtime solution would be beneficial in 2 ways. One, I can get the analysis I need without waiting up to a day. Two, the network use would be lower and more spread out, instead of spiking once a day.
It's clear that 'aws s3 sync' isn't the right tool here. Has anyone dealt with a similar situation?
One potential solution could be:
Set up a service on the log-file side that continuously syncs (or runs 'aws s3 cp' on) new files based on the modified date. But wouldn't that also need to scan the whole directory tree on the log server?
For reference, the log-file directory structure is like:
/var/log/files/done/{year}/{month}/{day}/{source}-{hour}.txt
There is also a /var/log/files/processing/ directory for files being written to.
Any advice would be appreciated. Thanks!
You could have a Lambda function triggered automatically whenever a new object is saved to your S3 bucket. Check Using AWS Lambda with Amazon S3 for details. The event passed to the Lambda function will contain the file name, allowing you to target only the new files in the syncing process.
If you'd like to wait until you have, say, 1,000 files in order to sync in batches, you could use AWS SQS and the following workflow (using 2 Lambda functions, 1 CloudWatch rule and 1 SQS queue), sketched in code after this list:
S3 invokes Lambda whenever there's a new file to sync
Lambda stores the filename in SQS
CloudWatch triggers another Lambda function every X minutes/hours to check how many files are waiting in SQS for syncing. Once there are 1,000 or more, it retrieves those filenames and runs the syncing process.
Keep in mind that Lambda has a hard timeout of 5 minutes. If your sync job takes too long, you'll need to break it into smaller chunks.
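A rough sketch of both Lambdas under those assumptions (the queue URL is a placeholder and the actual copy step is left as a stub):

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/files-to-sync"  # placeholder
BATCH_THRESHOLD = 1000

def enqueue_handler(event, context):
    # Triggered by S3: push each new object key onto the queue.
    for record in event["Records"]:
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }),
        )

def sync_handler(event, context):
    # Triggered by a CloudWatch schedule: sync once enough files have queued up.
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )
    if int(attrs["Attributes"]["ApproximateNumberOfMessages"]) < BATCH_THRESHOLD:
        return
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            item = json.loads(msg["Body"])
            # ... copy/sync the file referenced by item["bucket"] / item["key"] here ...
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])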
You could set the bucket up to log HTTP requests to a separate bucket, then parse the log to look for newly created files and their paths. One trouble spot: as well as PUT requests, you have to look for the multipart upload operations, which are a sequence of POSTs. It's best to log for a few days to see what gets created before putting any effort into this approach.
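If you went down that route, a very rough parsing sketch could filter on the operation field of the server access log lines. The operation names below (especially the one for multipart completion) are assumptions worth verifying against your own logs before relying on them:

import re

# S3 server access logs are space-delimited with a bracketed timestamp; this
# regex only pulls out the fields needed here and is deliberately loose.
LOG_PATTERN = re.compile(
    r'^(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<ip>\S+) (?P<requester>\S+) '
    r'(?P<request_id>\S+) (?P<operation>\S+) (?P<key>\S+) '
)

# Assumed operation names meaning "a new object was created"; check your logs.
CREATE_OPS = {"REST.PUT.OBJECT", "REST.POST.UPLOAD"}

def new_object_keys(log_lines):
    # Yield object keys for requests that look like object creation.
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if match and match.group("operation") in CREATE_OPS:
            yield match.group("key")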