I am working on a prediction task on GCP with the census dataset; I'm still discovering the Google APIs (ML Engine, etc.).
When I launch the prediction job, the job runs successfully, but it doesn't display the result.
Can anyone help? Do you have any idea why it doesn't generate an output?
Thanks in advance :)
This is the error that occurs
https://i.stack.imgur.com/9gyTb.png
This error is common when you train with one version of TF and then try serving with a lower version. For instance, if you are using Cloud console to deploy your model, it currently has no way of letting you select the version of TensorFlow for serving, so the model is deployed using TF 1.0, but your model may have been trained with a higher version of TF (current version is 1.7).
The Cloud console doesn't currently let you select the version (it will soon!), but using gcloud or the REST API directly does allow you to.
In the docs, there is a section on creating a model that has code snippets under "gcloud" and "python". With gcloud you simply add the argument --runtime-version=1.6 (or whatever version) and with python you add the property "runtimeVersion": "1.6" to the body of the request.
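For illustration, here is a rough sketch of setting the runtime version when creating a model version through the REST API with the Python client library; the project, model, version, and bucket names are placeholders, and the equivalent gcloud command simply adds --runtime-version to versions create.

# Sketch only: create a model version with an explicit runtime version.
# Assumes google-api-python-client is installed and application-default
# credentials are configured; every resource name below is a placeholder.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
request = ml.projects().models().versions().create(
    parent='projects/my-project/models/my_model',
    body={
        'name': 'v2',
        'deploymentUri': 'gs://my-bucket/model-dir/',
        'runtimeVersion': '1.6',  # match the TF version used for training
    },
)
response = request.execute()
print(response)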
I'm completely confused about how SageMaker Experiments works. I used the SDK to create an Experiment and a Trial. Now I want to track job parameters, metadata, and metrics.
Should I create Trial Components manually with the SDK, or let the SageMaker Estimator's fit() call create them for me?
After creating my experiment and trial, I use the code below:
job.fit(
    inputs,
    experiment_config={
        "ExperimentName": reg_experiment.experiment_name,
        "TrialName": trial1.trial_name,
        "TrialComponentDisplayName": "training-with-RF1",
    },
    wait=False,
)
When I look in Studio, I see an automatically created Trial component named "training-with-RF1".
I see here and here that we can (can = must? should? could?...) also create Trials manually, for example with
from smexperiments import tracker, trial  # from the sagemaker-experiments package

my_trial = trial.Trial.create('AutoML')
my_tracker = tracker.Tracker.create()
my_tracker.log_parameter('learning_rate', 0.01)
my_trial.add_trial_component(my_tracker)
Or here with:
from smexperiments.trial import Trial

Trial.create(
    trial_name=trial_name,
    experiment_name=mnist_experiment.experiment_name,
    sagemaker_boto_client=sm)
When I create Trials manually like that, they appear as separate, empty Trials, distinct from the Trials created by the SageMaker jobs (see below).
I'm confused because the AWS blog post says we have to create Trials manually; however, SageMaker training jobs seem to create those Trials on our behalf...
I'm completely confused by this service. Can someone please help?
The best way to do this is to create an Experiment and a Trial, and then pass the experiment config to the training job. The training job will automatically create a Trial Component and add it to the Trial.
Depending on the type of training job you are using, some metrics will automatically be tracked in the Trial Component. You can set this up through metric_definitions regex in the Estimator.
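As a rough sketch of what that can look like (the estimator arguments and the regex here are illustrative, not taken from your job):

# Illustrative only: capture a metric printed by the training script
# (e.g. a log line like "validation-rmse=0.42") into the Trial Component.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri='<your-training-image>',        # placeholder
    role='<your-sagemaker-execution-role>',   # placeholder
    instance_count=1,
    instance_type='ml.m5.xlarge',
    metric_definitions=[
        {'Name': 'validation:rmse', 'Regex': 'validation-rmse=([0-9\\.]+)'},
    ],
)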
If you are running the training job in script mode, you can install sagemaker-experiments in the container running the job (or from the Python script using subprocess.call) and import the Tracker object. You can then use the Tracker to log metrics from the training script to the Trial Component.
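A minimal sketch of that, assuming the sagemaker-experiments package is available inside the training container (the parameter and metric names are made up):

# Inside the training script: with no arguments, Tracker.load() picks up the
# Trial Component that SageMaker created for this job via experiment_config.
from smexperiments.tracker import Tracker

with Tracker.load() as exp_tracker:
    exp_tracker.log_parameter('n_estimators', 100)        # illustrative parameter
    exp_tracker.log_metric('validation:accuracy', 0.93)   # illustrative metric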
There are some examples here: https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker-experiments
This is the documentation for the sagemaker-experiments SDK: https://sagemaker-experiments.readthedocs.io/en/latest/tracker.html
For a project, I have to create a Dataproc cluster with one of the outdated image versions (for example, 1.3.94-debian10) that contain the Apache Log4j 2 vulnerabilities. The goal is to trigger the related alert (DATAPROC_IMAGE_OUTDATED) in order to check how Security Command Center (SCC) works (it is just for a test environment).
I tried to run the command

gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10

but got the following message:

ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster

That makes sense, since it protects the cluster.
I did some research and discovered that I will have to create a custom image with that version and create the cluster from it. The thing is, I have tried reading the documentation and looking for a tutorial, but I still can't understand how to get started or how to run the generate_custom_image.py file, since I am not comfortable with Cloud Shell (I prefer the console).
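From my reading of the docs, the invocation seems to be roughly like the following, but I'm not sure whether these flags are right or whether the tool even accepts such an old version (all the values here are just placeholders I made up):

python generate_custom_image.py \
    --image-name=custom-1-3-94-debian10 \
    --dataproc-version=1.3.94-debian10 \
    --customization-script=customization_script.sh \
    --zone=us-east1-b \
    --gcs-bucket=gs://my-bucket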
Can someone help? Thank you
Can we train a model by just providing the data and the related column names, without writing a trainer, in Google Cloud ML, using either the REST API or the command-line interface?
Yes. You can use Google Cloud Datalab, which comes with a structured data solution. It has an easier interface and takes care of the trainer. You can view the notebooks without setting up Datalab:
https://github.com/googledatalab/notebooks/tree/master/samples/ML%20Toolbox
Once you set up Datalab, you can run the notebooks. To set up Datalab, see https://cloud.google.com/datalab/docs/quickstarts.
Instead of building a model and calling the Cloud ML service directly, you can try Datalab's ML Toolbox, which supports structured data and image classification. The ML Toolbox takes your data and automatically builds and trains a model; you just have to describe your data and what you want to do.
You can view the notebooks first without setting up Datalab:
https://github.com/googledatalab/notebooks/tree/master/samples/ML%20Toolbox
To set up Datalab and actually run these notebooks, see https://cloud.google.com/datalab/docs/quickstarts.
I am running a Django app on GCP. My idea was to call a Python script from "view.py" to run a machine-learning algorithm and then display the result on the page.
But now I understand that running a machine-learning library like scikit-learn on GAE will not be possible (see Tim's answer here and this thread).
But suppose I still need to do this. I believe there are two possible ways, but I am not sure whether my guesses are right:
1) Since Google Datalab provides an entire Anaconda-like distribution, if there is a Datalab API that can be called from a Python file in the Django app, could I achieve my goal that way?
2) Could I install the scikit-learn library on a Compute Engine instance on GCP, somehow send it a request to run my code, and then return the output back to the Python file in the Django app?
I am very new to client-server and cloud computing in general, so please provide examples (if possible) for any suggestions or pointers.
Regards,
I believe what you want is to use the App Engine Flex environment rather than the standard App Engine environment.
App Engine Flex uses a Compute Engine VM to run your code, so it does not have the library limitations that standard App Engine has.
Specifically, you'll need to add a 'requirements.txt' file to specify the version of scikit-learn that you want installed, and then add a 'vm: true' clause to your app.yaml file.
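A minimal sketch of the two files, assuming the era's 'vm: true' flexible-environment syntax (the scikit-learn version is just an example, and your Django dependencies go in requirements.txt as well):

requirements.txt:
    Django
    scikit-learn==0.18.1

app.yaml:
    runtime: python
    vm: true
    # plus your usual entrypoint/handlers configuration for the Django app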
sklearn is now supported on ML Engine.
So, another alternative now is to use online prediction on Cloud ML Engine, and deploy your scikit-learn model as a web service.
Here is a fully worked out example of using fully-managed scikit-learn training, online prediction and hyperparameter tuning:
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/sklearn/babyweight_skl.ipynb
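As a rough sketch of that flow (everything here is a placeholder, not code from the linked notebook; ML Engine's scikit-learn online prediction expects the exported file to be named model.joblib or model.pkl):

# Sketch only: train a scikit-learn model locally and export it in the
# format Cloud ML Engine online prediction expects; names are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib  # the bundled joblib, as in the docs of that era

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=10)
model.fit(X, y)
joblib.dump(model, 'model.joblib')

# Then copy the file to Cloud Storage and create a version, roughly:
#   gsutil cp model.joblib gs://my-bucket/sklearn-model/
#   gcloud ml-engine versions create v1 --model my_sklearn_model \
#       --origin gs://my-bucket/sklearn-model/ --framework scikit-learn \
#       --runtime-version 1.8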
I've been trying to use custom-made images to run my Google Dataflow pipeline. Based on the information from https://cloud.google.com/compute/docs/reference/latest/images, I've tested the following code snippets:
DataflowPipelineOptions options = PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
...
options.setDiskSourceImage("ubuntu-1504-vivid-v20150911");
options.setDiskSourceImage("projects/ubuntu-os-cloud/global/images/ubuntu-1504-vivid-v20150911");
options.setDiskSourceImage("https://www.googleapis.com/compute/beta/projects/ubuntu-os-cloud/global/images/ubuntu-1504-vivid-v20150911");
All of the above attempts led to the following error in my pipeline:
(b9c7b66a676906f4): Unable to create VMs. Causes: (b9c7b66a67690aef): Error: Message: Invalid value for field 'resource.disks[0].initializeParams.sourceImage': '[edited]'. Must be the URL to a Compute resource of the correct type HTTP Code: 400
Using a custom disk image with Dataflow is not a viable option. The diskSourceImage flag is deprecated and will be removed in a future SDK release. The reason it is no longer supported is that the Dataflow service relies on versioned resources in the VM image, so Dataflow needs control of the VM image in order to upgrade it as necessary. If users supply their own custom images, we have no way of keeping them in sync with the requirements of the Dataflow service.
If your custom VM image is based on a Dataflow image, you would be able to execute jobs using that custom image until the next release of the Dataflow VM image, but there is no reasonable way to keep your custom images in sync with Dataflow's VM images, so you would not be able to keep this working.
If you would like to customize the VM image, please let us know why (e.g. send us an email at dataflow-feedback@google.com) so we can either suggest an alternative solution or consider supporting your use case in the future.
There's a subtle issue with setDiskSourceImage -- it uses 'beta' instead of the current 'v1' version for Compute Engine. If you try the following, it should work:
options.setDiskSourceImage("https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1504-vivid-v20150911");