Cannot launch Vertex Hypertune Job (Google Cloud Platform) - google-cloud-ml

I am trying to use Vertex Hypertune from the Google Cloud console. I filled in the forms to indicate my dataset, Python package, compute resources, and so on.
Everything seemed fine up until I submitted the job, at which point I got an instant error (so it is probably not an issue with my code, because there is no way it has even been run):
Unable to parse `training_pipeline.training_task_inputs` into hyperparameter tuning task `inputs` defined in the file gs://google-cloud-aiplatform/schema/trainingjob/definition/hyperparameter_tuning_task_1.0.0.yaml
I am really confused about why I get this error, as I launched a training job without hyperparameter tuning using the same arguments and it worked just fine.
Any help would be truly appreciated.
Note: I used a tabular dataset that comes from a BigQuery table (loaded with the dataset functionality). Default parameters were chosen for this dataset.
I picked the TensorFlow 1.15 pre-built container and added my Python code as a .tar.gz archive (generated with python setup.py sdist).
I configured only one hyperparameter (learning rate, a double between 0.001 and 0.1, with the goal of maximizing 'accuracy' as declared in hypertune) and picked the lightest standard machine (n1-standard-4).
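For reference, reporting the tuning metric with the cloudml-hypertune helper and receiving the tuned learning rate as a command-line flag typically looks something like the sketch below (the flag name, metric value, and step are illustrative placeholders, not taken from the original trainer):
import argparse
import hypertune  # pip package: cloudml-hypertune

parser = argparse.ArgumentParser()
# The tuning service passes each hyperparameter as a command-line flag;
# '--learning_rate' is assumed to be the name configured in the console.
parser.add_argument('--learning_rate', type=float, default=0.01)
args = parser.parse_args()

# ... build and train the model using args.learning_rate ...
accuracy = 0.93  # placeholder for the real evaluation result

hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='accuracy',  # must match the metric named in the job
    metric_value=accuracy,
    global_step=1000)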
EDIT: Following the comment from Jofre, it works now, so it was probably caused by a temporary UI bug.

Related

AI Platform Built-in Image Classification Algorithm doesn't export a model at end of training

I've been training using the new AI Platform Built-in Image Classification Algorithm. Oftentimes, despite the training job completing successfully, a saved model is not output to the GCS job directory. There are no errors in the logs. The only indication in the logs that something is wrong is the absence of the following lines:
Performing best model export
SavedModel written to: jobDirSubDir/saved_model.pb
Export best SavedModel from jobDirSubDir to jobDirSubDir/model
The job simply completes after the final evaluation is finished.
Any hints for how to troubleshoot this built-in algorithm would be greatly appreciated. Or, if it's open source please point me at the correct repo.
Thanks

How do I write a google cloud dataflow transform mapping?

I'm upgrading a Google Cloud Dataflow job from the Dataflow Java SDK 1.8 to version 2.4 and then trying to update the existing Dataflow job on Google Cloud using the --update and --transformNameMapping arguments, but I can't figure out how to write the transformNameMapping properly so that the upgrade succeeds and passes the compatibility check.
My code fails at the compatibility check with the error:
Workflow failed. Causes: The new job is not compatible with 2018-04-06_13_48_04-12999941762965935736. The original job has not been aborted., The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings. If these steps have been renamed or deleted, please specify them with the update command.
The dataflow transform names for the existing, currently running job are:
PubsubIO.Read
ParDo(ExtractJsonPath) - A custom function we wrote
ParDo(AddMetadata) - Another custom function we wrote
BigQueryIO.Write
In my new code that uses the 2.4 SDK, I've changed the 1st and 4th transforms/functions because some libraries were renamed and some of the old SDK's functions were deprecated in the new version.
You can see the specific transform code below:
The 1.8 SDK version:
PCollection<String> streamData =
    pipeline
        .apply(PubsubIO.Read
            .timestampLabel(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .topic(options.getPubsubTopic()));
streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply(BigQueryIO.Write
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
The 2.4 SDK version I rewrote:
PCollection<String> streamData =
    pipeline
        .apply("PubsubIO.readStrings", PubsubIO.readStrings()
            .withTimestampAttribute(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .fromTopic(options.getPubsubTopic()));
streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply("BigQueryIO.writeTableRows", BigQueryIO.writeTableRows()
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
So it seems to me like PubsubIO.Read should map to PubsubIO.readStrings and BigQueryIO.Write should map to BigQueryIO.writeTableRows. But I could be misunderstanding how this works.
I've been trying a wide variety of things. I tried to give those two transforms that I'm failing to remap explicit names, since they formerly were not explicitly named, so I updated my apply calls to .apply("PubsubIO.readStrings" and .apply("BigQueryIO.writeTableRows" and then set my transformNameMapping argument to:
--transformNameMapping={\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or even trying to remap all the internal transforms inside the composite transform
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup\",\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
but I seem to get the same exact error no matter what:
The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings.
Am I doing something seriously wrong? Would anybody who has written a transform mapping before be willing to share the format they used? I can't find any examples online at all besides the main Google documentation on updating Dataflow jobs, which doesn't really cover anything beyond the simplest case, --transformNameMapping={"oldTransform1":"newTransform1","oldTransform2":"newTransform2",...}, and doesn't make the example very concrete.
It turns out there was additional information that I was missing in the logs on the Google Cloud web console's Dataflow job details page. I needed to adjust the log level from Info to show any log level, and then I found several step fusion messages such as the following (although there were far more):
2018-04-16 (13:56:28) Mapping original step BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey to write/StreamingInserts/StreamingWriteTables/Reshuffle/GroupByKey in the new graph.
2018-04-16 (13:56:28) Mapping original step PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource in the new graph.
Instead of trying to map PubsubIO.Read to PubsubIO.readStrings, I needed to map to the steps that I found mentioned in that additional logging. In this case I got past my errors by mapping PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource and BigQueryIO.Write/BigQueryIO.StreamWithDeDup to BigQueryIO.Write/StreamingInserts/StreamingWriteTables. So try mapping your old steps to those mentioned in the full logs before the job failure message.
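In other words, the mapping argument that got past the missing-step check would look roughly like this (reconstructed from the steps described above, not copied verbatim from the actual command):
--transformNameMapping={\"PubsubIO.Read\":\"PubsubIO.Read/PubsubUnboundedSource\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.Write/StreamingInserts/StreamingWriteTables\"}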
Unfortunately I'm now working through a failure of the compatibility check due to a change in the coder used between the old code and the new code, but my missing-step errors are solved.

Code changes needed for custom distributed ML Engine Experiment

I completed this tutorial on distributed TensorFlow experiments within an ML Engine experiment, and I am looking to define my own custom tier instead of the STANDARD_1 tier that they use in their config.yaml file. If using the tf.estimator.Estimator API, are any additional code changes needed to create a custom tier of any size? For example, the article suggests: "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." This would suggest that the config.yaml file below should be possible:
trainingInput:
  scaleTier: CUSTOM
  masterType: complex_model_m
  workerType: complex_model_m
  parameterServerType: complex_model_m
  workerCount: 10
  parameterServerCount: 4
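(For reference, a custom-tier config.yaml like this is normally handed to the training service with the --config flag when submitting the job; the job name, package path, and region in this sketch are placeholders, not taken from the question:)
gcloud ml-engine jobs submit training my_custom_tier_job \
  --module-name trainer.task \
  --package-path trainer/ \
  --region us-central1 \
  --config config.yaml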
Are any code changes needed to the MNIST tutorial to be able to use this custom configuration? Would this distribute the X number of batches across the 10 workers as the tutorial suggests is possible? I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package: task.py, even though they are also using the Estimator API. So, is there any additional configuration needed? My current understanding is that, when using the Estimator API (even within your own defined model), there should be no need for any additional changes.
Does any of this change if the config.yaml specifies using GPUs? This article suggests for the Estimator API "No code changes are necessary as long as your ClusterSpec is configured properly. If a cluster is a mixture of CPUs and GPUs, map the ps job name to the CPUs and the worker job name to the GPUs." Since the config.yaml specifically identifies the machine type for parameter servers and workers, I am expecting that within ML Engine the ClusterSpec will be configured properly based on the config.yaml file. However, I am not able to find any ML Engine documentation that confirms no changes are needed to take advantage of GPUs.
Last, within ML Engine I am wondering if there are any ways to evaluate the utilization of different configurations. The line "If you distribute 10,000 batches among 10 worker nodes, each node works on roughly 1,000 batches." suggests that the benefit of additional workers would be roughly linear, but I don't have any intuition around how to determine whether more parameter servers are needed. What would one be able to check (either within the cloud dashboards or TensorBoard) to determine whether they have a sufficient number of parameter servers?
are any additional code changes needed to create a custom tier of any size?
No; no changes are needed to the MNIST sample to get it to work with a different number or type of workers. To use a tf.estimator.Estimator on CloudML Engine, you must have your program invoke learn_runner.run, as exemplified in the samples. When you do so, the framework reads the TF_CONFIG environment variable and populates a RunConfig object with the relevant information, such as the ClusterSpec. It will automatically do the right thing on Parameter Server nodes and it will use the provided Estimator to start training and evaluation.
Most of the magic happens because tf.estimator.Estimator automatically uses a device setter that distributes ops correctly. That device setter uses the cluster information from the RunConfig object whose constructor, by default, uses TF_CONFIG to do its magic (e.g. here). You can see where the device setter is being used here.
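As an illustration of what the framework is reading, here is a minimal sketch of the TF_CONFIG layout that CloudML Engine sets on each node and how a tf.contrib.learn RunConfig picks it up (the host names, ports, and task index are made-up placeholders):
import json
import os

# Shape of the TF_CONFIG environment variable set on each node;
# the hosts below are placeholders, not real ML Engine addresses.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'master': ['master-0:2222'],
        'worker': ['worker-0:2222', 'worker-1:2222'],
        'ps': ['ps-0:2222'],
    },
    'task': {'type': 'worker', 'index': 1},
    'environment': 'cloud',
})

from tensorflow.contrib.learn import RunConfig

config = RunConfig()                      # with no arguments, reads TF_CONFIG
print(config.cluster_spec.as_dict())      # ClusterSpec built from the 'cluster' entry
print(config.task_type, config.task_id)   # 'worker', 1
print(config.num_ps_replicas)             # 1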
This all means that you can just change your config.yaml by adding/removing workers and/or changing their types and things should generally just work.
For sample code using a custom model_fn, see the census/customestimator example.
That said, please note that as you add workers, you are increasing your effective batch size (this is true regardless of whether or not you are using tf.estimator). That is, if your batch_size was 50 and you were using 10 workers, each worker processes batches of size 50, for an effective batch size of 10*50=500. Then if you increase the number of workers to 20, your effective batch size becomes 20*50=1000. You may find that you need to decrease your learning rate accordingly (linear seems to generally work well; ref).
I poked around some of the other ML Engine samples and found that reddit_tft uses distributed training, but they appear to have defined their own runconfig.cluster_spec within their trainer package: task.py, even though they are also using the Estimator API. So, is there any additional configuration needed?
No additional configuration is needed. The reddit_tft sample does instantiate its own RunConfig; however, the constructor of RunConfig grabs any properties not explicitly set during instantiation by using TF_CONFIG, and it does so only as a convenience to figure out how many Parameter Servers and workers there are.
Does any of this change if the config.yaml specifies using GPUs?
You should not need to change anything to use tf.estimator.Estimator with GPUs, other than possibly needing to manually assign ops to the GPU (but that's not specific to CloudML Engine); see this article for more info. I will look into clarifying the documentation.
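If you do end up pinning ops manually, a minimal TF 1.x sketch of assigning an op to a GPU looks like the following (plain TensorFlow, nothing CloudML Engine-specific; the op itself is just an example):
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Pin this op to the first GPU; allow_soft_placement lets it fall back
# to the CPU on machines without a GPU.
with tf.device('/device:GPU:0'):
    b = tf.matmul(a, a)

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(b))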

AWS ec2 to run a python program using latex and OpenCV

A friend and I are working on a machine learning project together. We've managed to collect about 5,000 tex documents (we hope to get up to around 100,000 soon). We have a python script that we run on each document to do some text manipulation, extract particular parts of the tex code, compile the parts, convert the compiled parts to cropped PNG images, and search a converted PNG of the full tex for the cropped images using OpenCV. The code takes between 30 seconds and 2 minutes on the documents we've tried so far, so we really need to speed it up.
I've been tasked with gaining access to a computer cluster and figuring out how to implement our code on such a cluster. Someone suggested I look into using AWS, so I've made an account and have been trying to figure out how to use EC2 for the past few hours. Am I on the right track, or is there some other part of AWS or something else entirely that would be better suited to my task?
Whatever I use, it has to have access to the various Python libraries in our code and to pdflatex and the full set of TeX packages. Is this possible on EC2? I have almost no idea how to go about using EC2 (I've managed to start some instances, but how do I use them to run my script? Do I need to change my Python script to accommodate the parallel processing, or does EC2 take care of that somehow? Is it as easy as starting a Linux instance and installing the programs I need, like I would on any other Linux machine?). None of the tutorials are immediately useful, and I'm still not even sure if EC2 is capable of doing what I'm looking for. Any advice is appreciated.
I wouldn't normally answer this kind of question, but it sounds like you are doing something interesting, so let's have a go.
Q1.
"We have a python script that we run on each document to do some text
manipulation, extract particular parts of the tex code, compile the
parts, convert the compiled parts to cropped PNG images, and search a
converted PNG of the full tex for the cropped images using OpenCV.. we
really need to speed it up"
Probably you could split the 100,000 documents into 10 parts, set up 10 instances of the processing software, and do the run in parallel.
To set up 10 identical instances, there are many methods, but one of the simpler ways is to set up one machine as desired, take a snapshot, make an AMI, and then use the AMI to launch many more copies.
There might be an extra step of putting the results of the search into some kind of central database.
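As a rough sketch of that split (the shard environment variable, file locations, and process_document function below are illustrative placeholders, not from the actual project), each instance can be told which shard it owns and then process only its slice of the document list:
import glob
import os

def process_document(path):
    # Placeholder for the existing per-document pipeline
    # (text manipulation, compile parts, crop PNGs, OpenCV search).
    print('processing', path)

NUM_SHARDS = 10
# Each instance is launched with its own SHARD_INDEX (0..9), e.g. via user data.
shard_index = int(os.environ.get('SHARD_INDEX', '0'))

documents = sorted(glob.glob('/data/tex/*.tex'))   # placeholder location
my_documents = documents[shard_index::NUM_SHARDS]  # every 10th file, offset by shard

for path in my_documents:
    process_document(path)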
I don't know anything about OpenCV, but there are several suggestions that with a G3 instance type (which has a GPU) it might go faster. Google for "OpenCV on AWS".
Q2.
"trying to figure out how to use EC2 for the past few hours. Am I on
the right track, or is there some other part of AWS or something else
entirely that would be better suited to my task?"
EC2 is a general-purpose virtual machine, so if you already have code that runs on some other machine it is easy to move it to EC2.
EC2 has many features, but one you might find interesting is "spot instances": these are short-lived but cheap (typically 10% of the price) instances.
Q3.
Whatever I use, it has to have access to the various python libraries in our code and to pdflatex and the full set of tex packages. Is this possible on EC2?
Yes, they will install via pip or from system packages just like any other Linux system.
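For example, on a Debian/Ubuntu-based instance the dependencies mentioned in the question could be installed roughly like this (the package names are the usual ones, but treat them as assumptions about your exact setup):
sudo apt-get update
sudo apt-get install -y texlive-full python3-pip   # pdflatex plus the TeX packages
pip3 install opencv-python-headless numpy          # OpenCV bindings for the script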
Q4.
how do I use them to run my script? and do I need to change my python script to accommodate the parallel processing, or does EC2 take care of that somehow? is it as easy as starting a linux instance and installing the programs I need like I would on any other linux machine?
As described above, your basic task seems to scale well; you may need a step to collate the results. Yes, it is basically the same as any other Linux machine.

Weka - Measuring testing time

I'm using Weka 3.6.8 to carry out some machine learning and I want to find the 'time taken to test model on training/testing data'. When I test a predictive model on evaluation data, this parameter seems to be missing. Has this feature been removed from Weka, or is it just a setting I'm missing? All I seem to be able to find is the time taken to build the actual predictive model. (I've also checked the Weka manual but can't find anything.)
Thanks in advance
That feature was added in 3.7.7, so you need to upgrade. You should be able to get this data by running the test on the command line with the -T parameter.
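For example, a command-line evaluation with a separate test set supplied via -T looks roughly like this (the classifier and the ARFF file names are placeholders):
java -cp weka.jar weka.classifiers.trees.J48 -t training.arff -T testing.arff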