Please help me, I cannot solve this error:
ERROR: (gcloud.beta.ml.models.versions.create) FAILED_PRECONDITION: Field: version.deployment_uri Error: The model directory gs://valued-aquifer-164405-ml/mnist_deployable_garu_20170413_150711/model/ is expected to contain exactly one of the following: the 'export.meta' file, or 'saved_model.pb' file or 'saved_model.pbtxt' file. Please make sure one of these files exists and you have read access to it.
I am new to Google Cloud and ran into the same kind of issue when trying to create a version for a model. I have resolved it.
You need to do two steps:
1. Export the model: it will give you saved_model.pbtxt. I am using TensorFlow, so I used export_savedmodel().
2. Upload saved_model.pbtxt and the variables folder to Cloud Storage.
Then try creating the version again; a sketch of the export step follows.
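A minimal sketch of the export step, assuming a TF 1.x Estimator named estimator and a made-up feature spec (both are placeholders, not taken from the question):

import tensorflow as tf

# Assumed input schema; replace with your model's real features.
feature_spec = {'x': tf.FixedLenFeature([784], tf.float32)}
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

# Writes saved_model.pb plus a variables/ folder under the export base
# directory; pass as_text=True to get saved_model.pbtxt instead.
estimator.export_savedmodel('gs://your-bucket/model', serving_input_fn, as_text=True)

export_savedmodel() can write straight to a gs:// path; otherwise export locally and copy the whole directory up with gsutil cp -r.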
This command has since been updated to gcloud ml-engine versions create.
It is recommended to run gcloud components update to install the latest GCloud, then follow the new instructions for deploying your own models to Cloud ML Engine.
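For example (the model name and bucket are placeholders):

$ gcloud components update
$ gcloud ml-engine versions create v1 --model my_model --origin gs://your-bucket/model/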
Note: If you experience issues with GCloud in the future, it is recommended to report them in the Public Issue Tracker.
Related
For a project, I have to create a Dataproc cluster that uses one of the outdated image versions (for example, 1.3.94-debian10) containing the Apache Log4j 2 vulnerability. The goal is to trigger the related alert (DATAPROC_IMAGE_OUTDATED) in order to check how SCC works (it is just for a test environment).
I tried to run the command gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10 but got the following message: ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster. This makes sense, as it protects the cluster.
I did some research and discovered that I will have to create a custom image with that version and generate the cluster from it. The thing is, I have tried reading the documentation and looking for a tutorial, but I still can't understand how to start, or how to run the generate_custom_image.py file, for example, since I am not comfortable with Cloud Shell (I prefer the console).
Can someone help? Thank you
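For reference, the documented invocation of generate_custom_image.py from the GoogleCloudDataproc/custom-images repo looks roughly like this (every value below is a placeholder, and whether a blocked image version can still be built this way is untested):

$ python generate_custom_image.py \
    --image-name my-log4j-test-image \
    --dataproc-version 1.3.94-debian10 \
    --customization-script my-script.sh \
    --zone us-east1-b \
    --gcs-bucket gs://my-bucket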
I'm trying to run a custom prediction routine on Google's AI Platform, but always get an error when I include spaCy as a required package in my setup.py:
gcloud beta ai-platform versions create v1 --model MODEL_NAME --python-version=3.7 --runtime-version=1.15 --package-uris=gs://PATH_TO_PACKAGE --machine-type=mls1-c4-m2 --origin=gs://PATH_TO_MODEL --prediction-class=basic_predictor.BasicPredictor
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "There was a problem processing the user code: basic_predictor.BasicPredictor cannot be found. Please make sure (1) prediction_class is the fully qualified function name, and (2) it uses the correct package name as provided by the package_uris: ['gs://PATH_TO_PACKAGE'] (Error code: 4)"
As soon as I remove spaCy as a dependency, the AI Platform is able to create the version, so it looks like incorrect function names or package names cannot be the problem. Obviously, my model relies on spaCy, so leaving it out is not an option.
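For context, the setup.py is essentially the following (the package name and layout here are my own illustration; the spaCy dependency is the point):

from setuptools import setup

setup(
    name='basic_predictor',        # package that contains the predictor class
    version='0.1',
    scripts=['basic_predictor.py'],
    install_requires=['spacy'],    # removing this line lets version creation succeed
)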
Does anyone know how to fix this?
This seems to be an issue with how the dependencies are installed on AI Platform prediction nodes. I replicated the issue and got the same error; I also tried to package the library as a tar.gz file, but it failed in the same way.
I went ahead and reported this issue in the GCP Issue Tracker so the AI Platform team can investigate it. You can subscribe to it to receive notifications whenever there's an update.
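For reference, the tar.gz attempt used the standard sdist route (a sketch; the resulting file name depends on your setup.py):

$ python setup.py sdist --formats=gztar
$ gsutil cp dist/basic_predictor-0.1.tar.gz gs://PATH_TO_PACKAGE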
I am turning to GCP (Google Cloud Platform) to train a Keras model using Google's powerful GPUs. For that I created a VM instance on which I run a JupyterLab notebook.
I found myself unable to access my data, which is stored in a bucket on Google Cloud Storage.
I found this small doc; under Python, it defines two functions for creating and filling a dataset. My problem is that I couldn't install the datalabeling_v1beta1 module.
I already tried the command below, but with no result.
! gcloud components install datalab
I am new to GCP, so I really don't know much about the terminology. My goal for the moment is to upload my data set so that I can use it as if I were on Google Colab or on my local machine.
Please refer to installing dependencies
Create a new notebook, File -> New -> Notebook
%pip install google-cloud-datalabeling
For Data Labeling usage, see the reference documentation.
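Once installed, a minimal sketch of the client from the doc linked above (the project ID and names are placeholders, and the v1beta1 surface may differ slightly across library versions):

from google.cloud import datalabeling_v1beta1 as datalabeling

client = datalabeling.DataLabelingServiceClient()
# 'my-project' is a placeholder project ID.
dataset = client.create_dataset(
    parent='projects/my-project',
    dataset={'display_name': 'my_dataset', 'description': 'test dataset'},
)
print(dataset.name)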
I have a daily export that backs up my Datastore to a Cloud Storage bucket on Google Cloud Platform. I followed the directions as described here: https://cloud.google.com/datastore/docs/schedule-export
I do not specify by kind or namespace. As such, the storage bucket contains a folder structure looking like
Buckets/<bucket-name>/<YYYYMMDD-######>/default_namespace/all_kinds/
Inside that folder are the output-### files.
I want to import from this backup, but I only want one Kind of entity. According to this reference: https://cloud.google.com/sdk/gcloud/reference/datastore/import
I should be able to do that with this command: $ gcloud datastore import --kinds='Customer','Order'
However, when I run the following with the variables filled in:
$ gcloud datastore import --kinds='<kind>' gs://<bucket-name>/YYYYMMDD-######/YYYYMMDD-######.overall_export_metadata
ERROR: (gcloud.datastore.import) INVALID_ARGUMENT: The requested kinds/namespaces are not available
I get the above error. I have tried with multiple Kinds that I am sure exist in the Datastore.
Am I able to import by specific Kind if I exported without heed to Kind?
Thanks.
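For reference, the export side accepts the same flag; a kind-filtered export looks like this (the bucket name is a placeholder):

$ gcloud datastore export --kinds='Customer' gs://<bucket-name>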
Hello, I work at Google Cloud. I've been able to reproduce your case, and it seems to be an issue in Google Cloud Datastore. I created an entry for you in the Issue Tracker; you can stay tuned there. Thank you for reporting.
I am working on GCP to run predictions; I'm using the census dataset, and I'm currently discovering Google APIs (ML Engine, etc.).
When I launch the prediction job, the job runs successfully, but it doesn't display the result.
Can anyone help ? Do you have any idea why it doesn't generate an output ?
Thanks in advance :)
This is the error that occurs (screenshot):
https://i.stack.imgur.com/9gyTb.png
This error is common when you train with one version of TF and then try serving with a lower version. For instance, if you are using Cloud console to deploy your model, it currently has no way of letting you select the version of TensorFlow for serving, so the model is deployed using TF 1.0, but your model may have been trained with a higher version of TF (current version is 1.7).
Although the Cloud console doesn't currently let you select the version (but it will soon!), using gcloud or the REST API directly does allow you to.
In the docs, there is a section on creating a model that has code snippets under "gcloud" and "python". With gcloud you simply add the argument --runtime-version=1.6 (or whatever version) and with python you add the property "runtimeVersion": "1.6" to the body of the request.
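For example (a sketch; the model name, bucket, and version are placeholders):

$ gcloud ml-engine versions create v1 --model my_model --origin gs://your-bucket/model/ --runtime-version=1.6

And the equivalent request body for the REST/Python route, using the documented field names for projects.models.versions.create:

# Placeholder request body; adjust names and URIs to your deployment.
version_body = {
    'name': 'v1',
    'deploymentUri': 'gs://your-bucket/model/',
    'runtimeVersion': '1.6',
}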