How to reduce the initialisation and termination time of a Google Dataflow job?

I'm currently working on a POC, focusing primarily on Dataflow for ETL processing. I have created the pipeline using the Dataflow 2.1 Java Beam API, and on each run it takes about 3-4 minutes just to initialise and another 1-2 minutes to terminate, while the actual transformation (ParDo) takes less than a minute. I have tried running the jobs using a few different approaches:
Running the job on local machine
Running the job remotely on GCP
Running the job via Dataflow template
But it looks like all of the above methods consume more or less the same time for initialisation and termination, so this is becoming a bottleneck for the POC, as we intend to run hundreds of jobs every day.
I'm looking for a way to share the initialisation/termination time across all jobs so that it becomes a one-time activity, or for any other approach to reduce the time.
Thanks in advance!

From what I know, there is no way to reduce startup or teardown time. You shouldn't consider it a bottleneck, since each run of a job is independent of the last one, so you can run jobs in parallel, etc. You could also consider converting this to a streaming pipeline, if that's an option, to eliminate those times entirely.
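To illustrate the streaming suggestion, here is a minimal sketch (not the asker's actual pipeline) of a long-running Beam Java job that reads work items from a hypothetical Pub/Sub topic, applies a ParDo, and writes results back out. Because the job keeps running, worker initialisation is paid once rather than on every run. The topic names, project, and transform body are placeholders.

    // Minimal sketch: a long-running streaming job so start-up cost is paid once.
    // Topic and project names below are placeholders, not real resources.
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.options.StreamingOptions;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;

    public class StreamingEtlSketch {
      public static void main(String[] args) {
        StreamingOptions options =
            PipelineOptionsFactory.fromArgs(args).withValidation().as(StreamingOptions.class);
        options.setStreaming(true); // keep the job (and its workers) running

        Pipeline p = Pipeline.create(options);
        p.apply("ReadRequests",
                PubsubIO.readStrings().fromTopic("projects/my-project/topics/etl-requests"))
         .apply("Transform", ParDo.of(new DoFn<String, String>() {
           @ProcessElement
           public void processElement(ProcessContext c) {
             // the existing ETL logic would go here; upper-casing is just a placeholder
             c.output(c.element().toUpperCase());
           }
         }))
         .apply("WriteResults",
                PubsubIO.writeStrings().to("projects/my-project/topics/etl-results"));

        p.run();
      }
    }

Launched once with --runner=DataflowRunner, a job like this stays up as a single streaming pipeline, so per-request latency is bounded by processing time rather than by job start-up and teardown.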

Related

How to reduce the time taken by the Glue ETL job (Spark) to actually start executing?

I want to start a Glue ETL job; the execution time itself is fair, but the time Glue takes to actually start executing the job is too long.
I looked into various documentation and answers, but none of them gave me a solution. There was some explanation of this behavior (cold start), but no solution.
I expect the job to be up as soon as possible; it sometimes takes around 10 minutes to start a job that executes in 2 minutes.
Unfortunately it's not possible right now. Glue uses EMR under the hood, and it requires some time to spin up a new cluster with the desired number of executors. As far as I know, they keep a pool of spare EMR clusters with the most common DPU configurations, so if you are lucky your job can grab one and start immediately; otherwise it will wait.

Optimizing apache beam / cloud dataflow startup

I have done a few tests with Apache Beam using both auto-scaled workers and 1 worker, and each time I see a startup time of around 2 minutes. Is it possible to reduce that time, and if so, what are the suggested best practices for reducing it?
IMHO: Two minutes is very fast for a product like Cloud Dataflow. Remember, Google is launching a powerful Big Data service for you that autoscales.
Compare that time to other cloud vendors. I have seen some clusters (Hadoop) take 15 minutes to come online. In any event, you do not control the initialization process for Dataflow, so there is nothing for you to improve.

Is it possible to parallelize preprocessing with tensorflow-transform on my machine?

I am trying to preprocess larger amounts of data (one TFRecord file, ~1 GB) using tensorflow-transform v0.11.0 and Beam, locally only.
My code is largely inspired from https://github.com/tensorflow/transform/blob/master/examples/census_example.py
I have a Beam pipeline that works on smaller datasets (<100 MB), but the processing time increases dramatically as I add more data. Being new to tf-transform and Apache Beam, I am having a hard time finding the causes of and solutions to the problem, and I would like to avoid using Google Dataflow.
If I understand correctly, my pipeline runs locally using the Beam DirectRunner, but it uses only one core. Using multiple cores could be one way to improve my preprocessing time, but I don't know whether that is possible with the DirectRunner. Is there a way to make a tensorflow-transform pipeline run on multiple cores on my machine?
I looked through the options of the Beam pipeline and of the DirectRunner, and I can't find any indication of letting a runner access multiple cores or of creating multiple DirectRunners for a pipeline.
Thank you very much for any help I can get!
To add to Anton's comment: you can use Apache Flink to run the pipeline in parallel. More details are summarized in "Tensorflow transform on beams with flink runner".
You will also have to set the parallelism according to the total number of cores and start that many Flink TaskManagers. My recommendation would be to set the parallelism to (total number of cores / 2).
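As a rough illustration of the parallelism setting, here is a hedged sketch using the Beam Java SDK's Flink runner options. tensorflow-transform itself runs on the Beam Python SDK, where the equivalent would be passing flags such as --runner=FlinkRunner and --parallelism (check the Beam Flink runner docs); the Java class and option names below come from Beam's Flink runner module, and the pipeline body is omitted.

    // Sketch only: configuring Beam's Flink runner with an explicit parallelism.
    import org.apache.beam.runners.flink.FlinkPipelineOptions;
    import org.apache.beam.runners.flink.FlinkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FlinkParallelismSketch {
      public static void main(String[] args) {
        FlinkPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
        options.setRunner(FlinkRunner.class);
        // e.g. on an 8-core machine, following the "cores / 2" recommendation above
        options.setParallelism(4);
        // "[local]" runs an embedded Flink; point at a real cluster address otherwise
        options.setFlinkMaster("[local]");

        Pipeline p = Pipeline.create(options);
        // ... build the preprocessing pipeline here ...
        p.run().waitUntilFinish();
      }
    }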
I don't believe that's supported. The DirectRunner's main purpose is to make sure the pipeline implements the Beam model correctly. It is not optimized for production use and will probably introduce inefficiencies: https://beam.apache.org/documentation/runners/direct/
As a workaround you can manually start multiple DirectRunner pipelines to process different portions of the data.
A better option would be to use an actual parallel runner for these kinds of jobs; e.g. you can spin up a Flink cluster: https://beam.apache.org/documentation/runners/flink/
@Ankur @Anton Thanks for your answers. I agree that this approach is not production-friendly... We will try two other solutions:
tensorflow-transform on Dataflow
removing tensorflow-transform altogether and using Presto to get vocabulary files for categorical inputs, compute means and standard deviations to scale numerical inputs, etc. on the whole dataset

Scheduling strategy behind AWS Batch

I am wondering what the scheduling strategy behind AWS Batch looks like. The official documentation on this topic doesn't provide many details:
The AWS Batch scheduler evaluates when, where, and how to run jobs that have been submitted to a job queue. Jobs run in approximately the order in which they are submitted as long as all dependencies on other jobs have been met.
(https://docs.aws.amazon.com/batch/latest/userguide/job_scheduling.html)
"Approximately" fifo is quite vaque. Especially as the execution order I observed when testing AWS Batch did't look like fifo.
Did I miss something? Is there a possibility to change the scheduling strategy, or configure Batch to execute the jobs in the exact order in which they were submitted?
I've been using Batch for a while now, and it has always seemed to behave in roughly a FIFO manner. Jobs that are submitted first will generally be started first, but because of limitations with distributed systems, this general rule won't work out perfectly. Jobs with dependencies are kept in the PENDING state until their dependencies have completed, and then they go into the RUNNABLE state. In my experience, whenever Batch is ready to run more jobs from the RUNNABLE state, it picks the job with the earliest time submitted.
However, there are some caveats. First, if Job A was submitted first but requires 8 cores while Job B was submitted later but only requires 4 cores, Job B might be selected first if Batch has only 4 cores available. Second, after a job leaves the RUNNABLE state, it goes into STARTING while Batch downloads the Docker image and gets the container ready to run. Depending on a number of factors, jobs that were submitted at the same time may spend more or less time in the STARTING state. Finally, if a job fails and is retried, it goes back into the PENDING state with its original submit time. When Batch decides to select more jobs to run, it will generally pick the job with the earliest submit date, which will be the job that failed. If other jobs started before the first job failed, the first job will begin its second run after them.
There's no way to configure Batch to be perfectly FIFO because it's a distributed system, but generally if you submit jobs with the same compute requirements spaced a few seconds apart, they'll execute in the same order you submitted them.
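If strict ordering matters more than throughput, one workaround (my own suggestion, not a scheduling mode Batch offers) is to chain jobs explicitly with dependsOn, so each job stays PENDING until its predecessor succeeds. A rough sketch with the AWS SDK for Java v2 follows; the job names, queue, and job definition are placeholders.

    // Rough sketch: enforce strict ordering by chaining AWS Batch jobs with dependsOn,
    // so each job is held in PENDING until the previous one completes successfully.
    import java.util.List;
    import software.amazon.awssdk.services.batch.BatchClient;
    import software.amazon.awssdk.services.batch.model.JobDependency;
    import software.amazon.awssdk.services.batch.model.SubmitJobRequest;
    import software.amazon.awssdk.services.batch.model.SubmitJobResponse;

    public class ChainedBatchJobsSketch {
      public static void main(String[] args) {
        BatchClient batch = BatchClient.create();

        String previousJobId = null;
        for (String jobName : List.of("step-1", "step-2", "step-3")) {
          SubmitJobRequest.Builder request = SubmitJobRequest.builder()
              .jobName(jobName)
              .jobQueue("my-job-queue")            // placeholder queue name
              .jobDefinition("my-job-definition"); // placeholder job definition

          if (previousJobId != null) {
            // The new job only becomes RUNNABLE once the previous job has succeeded.
            request.dependsOn(JobDependency.builder().jobId(previousJobId).build());
          }

          SubmitJobResponse response = batch.submitJob(request.build());
          previousJobId = response.jobId();
        }

        batch.close();
      }
    }

Note that this serialises the jobs completely, so it trades away the parallelism Batch would otherwise give you.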

Cloud ML: Varying training time taken for the same data

I am using Google Cloud ML for training jobs. I observe a peculiar behavior in which the time taken for the training job to complete varies for the same data. I analyzed the CPU and memory utilization in the Cloud ML console and see very similar utilization in both cases (7 min and 14 min).
Can anyone let me know what would cause the service to take inconsistent time to complete the job?
I have the same parameters and data in both cases, and have also verified that the time spent in the PREPARING phase is pretty much the same in both cases.
Also, would it matter that I schedule multiple simultaneous independent training jobs on the same project? If so, I would like to know the rationale behind it.
Any help would be greatly appreciated.
The easiest way is to add more logging to inspect where the time was spent. You can also inspect training progress using TensorBoard. There's no VM sharing between multiple jobs, so it's unlikely caused by simultaneous jobs.
Also, the running time should be measured from the point when the job enters the RUNNING state. Job startup latency varies depending on whether it's a cold or warm start (i.e., we keep the VMs from the previous job running for a while).