Code changes needed for custom distributed ML Engine Experiment

I completed this tutorial on distributed TensorFlow experiments within ML Engine and I am looking to define my own custom tier instead of the STANDARD_1 tier that they use in their config.yaml file. If using the tf.estimator.Estimator API, are any additional code changes needed to create a custom tier of any size? For example, the article suggests: "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." This would suggest the config.yaml file below should be possible:
trainingInput:
  scaleTier: CUSTOM
  masterType: complex_model_m
  workerType: complex_model_m
  parameterServerType: complex_model_m
  workerCount: 10
  parameterServerCount: 4
Are any code changes needed to the MNIST tutorial to be able to use this custom configuration? Would this distribute the X number of batches across the 10 workers as the tutorial suggests would be possible? I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package: task.py, even though they are also using the Estimator API. So, is there any additional configuration needed? My current understanding is that if you are using the Estimator API (even within your own defined model), there should not need to be any additional changes.
Does any of this change if the config.yaml specifies using GPUs? This article suggests for the Estimator API "No code changes are necessary as long as your ClusterSpec is configured properly. If a cluster is a mixture of CPUs and GPUs, map the ps job name to the CPUs and the worker job name to the GPUs." Since the config.yaml specifically identifies the machine type for parameter servers and workers, I am expecting that within ML Engine the ClusterSpec will be configured properly based on the config.yaml file. However, I am not able to find any ML Engine documentation that confirms no changes are needed to take advantage of GPUs.
Last, within ML Engine I am wondering if there are any ways to gauge the effect of different configurations. The line "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." suggests that the benefit of additional workers would be roughly linear, but I don't have any intuition about how to determine whether more parameter servers are needed. What would one be able to check (either within the cloud dashboards or TensorBoard) to determine whether there are a sufficient number of parameter servers?

are any additional code changes needed to create a custom tier of any size?
No; no changes are needed to the MNIST sample to get it to work with a different number or type of worker. To use a tf.estimator.Estimator on CloudML Engine, you must have your program invoke learn_runner.run, as exemplified in the samples. When you do so, the framework reads in the TF_CONFIG environment variable and populates a RunConfig object with the relevant information, such as the ClusterSpec. It will automatically do the right thing on parameter server nodes and it will use the provided Estimator to start training and evaluation.
Most of the magic happens because tf.estimator.Estimator automatically uses a device setter that distributes ops correctly. That device setter uses the cluster information from the RunConfig object whose constructor, by default, uses TF_CONFIG to do its magic (e.g. here). You can see where the device setter is being used here.
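To make that concrete, here is a rough sketch of the kind of TF_CONFIG value the service would set for the CUSTOM tier above. The hostnames and exact contents are an assumption on my part for illustration only; on ML Engine the service writes this variable for each node, and a RunConfig constructed afterwards picks it up automatically.

import json, os
import tensorflow as tf

# Illustration only: ML Engine sets TF_CONFIG for you; you never write this yourself.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'master': ['master-0:2222'],
        'worker': ['worker-%d:2222' % i for i in range(10)],
        'ps': ['ps-%d:2222' % i for i in range(4)],
    },
    'task': {'type': 'worker', 'index': 3},
    'environment': 'cloud',
})

# The RunConfig constructor reads TF_CONFIG, so this node learns the cluster
# layout and its own role without any code changes.
config = tf.contrib.learn.RunConfig()
print(config.cluster_spec.as_dict(), config.task_type, config.task_id)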
This all means that you can just change your config.yaml by adding/removing workers and/or changing their types and things should generally just work.
For sample code using a custom model_fn, see the census/customestimator example.
That said, please note that as you add workers, you are increasing your effective batch size (this is true regardless of whether or not you are using tf.estimator). That is, if your batch_size was 50 and you were using 10 workers, each worker is processing batches of size 50, for an effective batch size of 10*50=500. If you then increase the number of workers to 20, your effective batch size becomes 20*50=1000. You may find that you need to decrease your learning rate accordingly (a linear adjustment generally seems to work well; ref).
I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package: task.py, even though they are also using the Estimator API. So, is there any additional configuration needed?
No additional configuration needed. The reddit_tft sample does instantiate its own RunConfig; however, the constructor of RunConfig grabs any properties not explicitly set during instantiation by using TF_CONFIG, and it does so only as a convenience to figure out how many parameter servers and workers there are.
Does any of this change if the config.yaml specifies using GPUs?
You should not need to change anything to use tf.estimator.Estimator with GPUs, other than possibly needing to manually assign ops to the GPU (but that's not specific to CloudML Engine); see this article for more info. I will look into clarifying the documentation.
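If you do decide to assign ops manually, a minimal sketch might look like the following (the model_fn, feature key, and layer sizes here are hypothetical, TF 1.x-style code); with the Estimator's default device setter this is usually unnecessary.

import tensorflow as tf

def my_model_fn(features, labels, mode):
    # Pin the compute-heavy ops to the GPU; variables still go to the
    # parameter servers via the replica device setter.
    with tf.device('/device:GPU:0'):
        hidden = tf.layers.dense(features['x'], 128, activation=tf.nn.relu)
        logits = tf.layers.dense(hidden, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)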

Related

When applying WriteFiles to an unbounded PCollection, must specify number of output shards explicitly

So far I've been trying to write one Parquet file per Window object, but I got so many small files in the end that I couldn't figure out what was going on, until I noticed a method I had forgotten about: withNumShards().
I was using it that way because all the examples lead to that, and in development I didn't need more than that.
Once I tested it with many more events, the wall time started to increase sharply, eventually exceeding a day!
So, digging into the docs in the code, it basically says that passing 0 instead of any other number leads to run-time determination of the required shards.
When running mvn compile, the following message pops up:
When applying WriteFiles to an unbounded PCollection, must specify number of output shards explicitly
Isn't there an option which allows you to specify the number of shards when deploying the Dataflow job?
I've tried adding --outputNumShards=20 --errorOutputNumShards=10 to -Dexec.args.
It seems that if you use WriteFiles for writing files (all FileBasedSink IOs use it under the hood, like FileIO, TextIO, etc.), then you still need to set the number of shards manually with withNumShards(int) for unbounded sources, and it should be greater than 0 (see: https://github.com/apache/beam/blob/release-2.16.0/sdks/java/core/src/main/java/org/apache/beam/sdk/io/WriteFiles.java#L314)

How to log more frequently than evaluating with `ray.tune.Trainable`

I am interested in using the tune library for reinforcement learning and I would like to use the in-built tensorboard capability. However, the metric that I am using to tune my hyperparameters is based on a time-consuming evaluation procedure that should be run infrequently.
According to the documentation, it looks like the _train method returns a dictionary that is used both for logging and for tuning hyperparameters. Is it possible to perform logging more frequently within the _train method? Alternatively, could I return the values that I wish to log from the _train method but sometimes omit the expensive-to-compute metric from the dictionary?
One option is to use your own logging mechanism in the Trainable. You can log to the trial-specific directory (Trainable.logdir). If this conflicts with the built-in Tensorboard logging, you can remove that by setting tune.run(loggers=None).
Another option is, as you mentioned, to sometimes omit the expensive-to-compute metric from the dictionary. If you run into issues with that, you can also return "None" as the value for those metrics that you don't plan to compute in a particular iteration.
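For example, a minimal sketch of that second option, using the older Trainable._setup/_train hooks that the question refers to (the metric names and the every-10th-iteration schedule are made up for illustration):

import random
from ray import tune

class MyTrainable(tune.Trainable):
    def _setup(self, config):
        self.iters = 0

    def _expensive_eval(self):
        return random.random()  # placeholder for the slow evaluation procedure

    def _train(self):
        self.iters += 1
        result = {"mean_loss": random.random()}  # cheap metric, logged every iteration
        if self.iters % 10 == 0:
            result["eval_score"] = self._expensive_eval()  # slow metric, computed rarely
        else:
            result["eval_score"] = None  # keep the key without recomputing it
        return result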
Hope that helps!

AWS ec2 to run a python program using latex and OpenCV

A friend and I are working on a machine learning project together. We've managed to collect about 5,000 tex documents (we hope to get up to around 100,000 soon). We have a python script that we run on each document to do some text manipulation, extract particular parts of the tex code, compile the parts, convert the compiled parts to cropped PNG images, and search a converted PNG of the full tex for the cropped images using OpenCV. The code takes between 30 seconds and 2 minutes on the documents we've tried so far, so we really need to speed it up.
I've been tasked with gaining access to a computer cluster and figuring out how to implement our code on such a cluster. Someone suggested I look into using AWS, so I've made an account and have been trying to figure out how to use EC2 for the past few hours. Am I on the right track, or is there some other part of AWS or something else entirely that would be better suited to my task?
Whatever I use, it has to have access to the various python libraries in our code and to pdflatex and the full set of tex packages. Is this possible on EC2? I have almost no idea how to go about using EC2 (I've managed to start some instances, but how do I use them to run my script? And do I need to change my python script to accommodate the parallel processing, or does EC2 take care of that somehow? Is it as easy as starting a linux instance and installing the programs I need like I would on any other linux machine?). None of the tutorials are immediately useful, and I'm still not even sure if EC2 is capable of doing what I'm looking for. Any advice is appreciated.
I wouldn't normally answer this kind of question, but it sounds like you are doing something interesting. So let's have a go.
Q1.
"We have a python script that we run on each document to do some text
manipulation, extract particular parts of the tex code, compile the
parts, convert the compiled parts to cropped PNG images, and search a
converted PNG of the full tex for the cropped images using OpenCV.. we
really need to speed it up"
Probably you could split the 100,000 documents into 10 parts, set up 10 instances of the processing software, and do the run in parallel.
To set up 10 identical instances there are many methods, but one of the simpler ways is to set up one machine as desired, take a snapshot, make an AMI, and then use the AMI to launch many more copies.
There might be an extra step of putting the results of the search into some kind of central database.
I don't know anything about OpenCV, but there are several suggestions that with a G3 instance type (this has a GPU) it might go faster. Google for "OpenCV on AWS".
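For instance, a rough boto3 sketch for launching 10 copies of a prepared AMI (the region, AMI ID, key name, and instance type here are placeholders, not recommendations):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # the AMI made from your configured machine
    InstanceType='c5.xlarge',
    KeyName='my-key-pair',
    MinCount=10,
    MaxCount=10,
)
print([i['InstanceId'] for i in response['Instances']])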
Q2.
"trying to figure out how to use EC2 for the past few hours. Am I on
the right track, or is there some other part of AWS or something else
entirely that would be better suited to my task?"
EC2 is a general-purpose virtual machine, so if you already have code that runs on some other machine it is easy to move it to EC2.
EC2 has many features, but one you might find interesting is "spot instances": these are short-lived but cheap (typically around 10% of the on-demand price) instances.
Q3.
Whatever I use, it has to have access to the various python libraries in our code and to pdflatex and the full set of tex packages. Is this possible on EC2?
Yes, they will install via pip or from OS packages just like on any other system.
Q4.
how do I use them to run my script? and do I need to change my python script to accommodate the parallel processing, or does EC2 take care of that somehow? is it as easy as starting a linux instance and installing the programs I need like I would on any other linux machine?
As described above, your basic task seems to scale well; you may need a step to collate the results. Yes, it is basically the same as any other linux machine.

When does an action not run on the driver in Apache Spark?

I have just started with Spark and was struggling with the concept of tasks.
Can anyone please help me understand when an action (say reduce) does not run in the driver program.
From the spark tutorial:
"Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel."
I'm currently experimenting with an application which reads a directory of 'n' files and counts the number of words.
From the web UI, the number of tasks is equal to the number of files, and all the reduce functions are taking place on the driver node.
Can you please describe a scenario where the reduce function won't execute at the driver? Does a task always include "transformation + action", or only "transformation"?
All the actions are performed on the cluster and results of the actions may end up on the driver (depending on the action).
Generally speaking, the Spark code you write around your business logic is not the program that actually runs; rather, Spark uses it to create a plan which will execute your code in the cluster. The plan creates a task from all the operations that can be done on a partition without the need to shuffle data around. Every time Spark needs the data arranged differently (e.g. after sorting), it will create a new task and a shuffle between the first and the latter tasks.
I'll take a stab at this, although I may be missing part of the question. A task is indeed always transformation(s) and an action. The transformations are lazy and would not submit anything, hence the need for an action. You can always call .toDebugString on your RDD to see where each job split will be; each level of indentation is a new stage. I think the reduce function showing on the driver is a bit of a misnomer, as it will run first in parallel and then merge the results. So, I would expect that the task does indeed run on the workers as far as it can.
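A small PySpark sketch of the word-count shape discussed above (the input path is a placeholder): printing toDebugString() on the final RDD shows the stage split at the shuffle, and the final reduce computes partial sums on the executors before merging them on the driver.

from pyspark import SparkContext

sc = SparkContext(appName="wordcount-debug")
counts = (sc.textFile("hdfs:///data/docs/*")      # roughly one task per input split/file
            .flatMap(lambda line: line.split())
            .map(lambda w: (w, 1))
            .reduceByKey(lambda a, b: a + b))      # shuffle here starts a new stage
print(counts.toDebugString())                      # indentation levels mark the stages
total = counts.values().reduce(lambda a, b: a + b) # action: partial sums on executors, merged on the driver
print(total)
sc.stop()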

How to parse the Freebase quad dump using Amazon MapReduce

I'm trying to extract movie information from Freebase; I just need the name of the movie, and the name and id of the director and of the actors.
I found it hard to do so using Freebase's topic dumps, because there is no reference to the director's ID, just the director's name.
What is the right approach for this task? Do I need to somehow parse the whole quad dump using Amazon's cloud? Or is there some easy way?
You do need to use the quad dump, but it is under 4 GB and shouldn't require Hadoop, MapReduce, or any cloud processing. A decent laptop should be fine. On a couple-year-old laptop, this simple-minded command:
time bzgrep '/film/' freebase-datadump-quadruples.tsv.bz2 | wc -l
10394545
real 18m56.968s
user 19m30.101s
sys 0m56.804s
extracts and counts everything referencing the film domain in under 20 minutes. Even if you have to make multiple passes through the file (which is likely), you'll be able to complete your whole task in less than an hour, which should mean there's no need for beefy computing resources.
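If you'd rather do that same pass from Python instead of shelling out to bzgrep, a rough equivalent is the following (the filename matches the command above; the substring filter is just as crude as the grep):

import bz2

count = 0
with bz2.open("freebase-datadump-quadruples.tsv.bz2", mode="rt", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "/film/" in line:   # same crude filter as the bzgrep above
            count += 1
print(count)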
You'll need to traverse an intermediary node (CVT in Freebase-speak) to get the actors, but the rest of your information should be connected directly to the subject film node.
Tom
First of all, I completely share Tom's point of view and his suggestion. I often use UNIX command line tools to take 'interesting' slices of data out of the Freebase data dump.
However, an alternative would be to load Freebase data into a 'graph' storage system locally and use APIs and/or the query language available from that system to interact with the data for further processing.
I use RDF, since the data model is quite similar and it is very easy to convert the Freebase data dump into RDF (see: https://github.com/castagna/freebase2rdf). I then load it into Apache Jena's TDB store (http://incubator.apache.org/jena/documentation/tdb/) and use the Jena APIs or SPARQL for further processing.
Another reasonable and scalable approach would be to implement what you need to do in MapReduce, but this makes sense only if the amount of processing you do touches a large fraction of the Freebase data and is not as trivial as counting lines. This is more expensive than using your own machine: you need a Hadoop cluster, or you need to use Amazon EMR. (I should probably write a MapReduce version of freebase2rdf ;-))
My 2 cents.