Where can I find LVQ in the Weka dev 3.7.11 jar?
When I import weka.classifiers.neural.lvq, I get a "class not found" error.
In the Weka Package Manager there is an LVQ clustering method, but I guess you are referring to the classifier from this site?
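If it is the classifier from that third-party project you want, its jar has to sit on the classpath next to weka.jar; the weka.classifiers.neural.lvq classes are not in the core Weka jar, which is why the import fails. A minimal sketch of using it, assuming the jar is present and that the package exposes a class named weka.classifiers.neural.lvq.Lvq1 (the class name, jar name, and dataset path are assumptions, not confirmed from the source):

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
// Lvq1 lives in the third-party jar, not in weka.jar
// (class name is an assumption based on that project's layout).
import weka.classifiers.neural.lvq.Lvq1;

public class LvqDemo {
    public static void main(String[] args) throws Exception {
        // Load a dataset and mark the last attribute as the class.
        Instances data = DataSource.read("iris.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Lvq1 lvq = new Lvq1();
        lvq.buildClassifier(data);   // train the LVQ classifier
        System.out.println(lvq);     // print the model summary
    }
}
```

Compile and run with both jars on the classpath, e.g. `java -cp weka.jar:lvq.jar LvqDemo` (jar file names are illustrative).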
I'm trying to run a custom prediction routine on Google's AI Platform, but I always get an error when I include spaCy as a required package in my setup.py:
gcloud beta ai-platform versions create v1 --model MODEL_NAME --python-version=3.7 --runtime-version=1.15 --package-uris=gs://PATH_TO_PACKAGE --machine-type=mls1-c4-m2 --origin=gs://PATH_TO_MODEL --prediction-class=basic_predictor.BasicPredictor
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: "There was a problem processing the user code: basic_predictor.BasicPredictor cannot be found. Please make sure (1) prediction_class is the fully qualified function name, and (2) it uses the correct package name as provided by the package_uris: ['gs://PATH_TO_PACKAGE'] (Error code: 4)"
As soon as I remove spaCy as a dependency, the AI Platform is able to create the version, so it looks like incorrect function names or package names cannot be the problem. Obviously, my model relies on spaCy, so leaving it out is not an option.
Does anyone know how to fix this?
This seems to be an issue with how the dependencies are installed on AI Platform prediction nodes. I replicated the issue and got the same error; I also tried packaging the library as a tar.gz file, but it failed in the same way.
I have reported this issue in the GCP Issue Tracker so the AI Platform team can investigate it. You can subscribe to the report to receive notifications whenever there is an update.
I am trying out the AWS DJL platform. I want to load a custom-trained TensorFlow model and perform inference. I could not find a direct example of this in the official GitHub repo. Can anyone guide me?
Here is a demo project: https://github.com/aws-samples/djl-demo/tree/master/pneumonia-detection
And here are the docs about loading a TensorFlow model:
https://github.com/awslabs/djl/blob/master/docs/tensorflow/how_to_import_keras_models_in_DJL.md
https://github.com/awslabs/djl/blob/master/docs/load_model.md
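Putting those docs together, loading a local TensorFlow SavedModel in DJL goes through the Criteria API. A minimal sketch, assuming a SavedModel directory on disk and a raw NDList-in/NDList-out model (the model path is illustrative; depending on your DJL version you may need ModelZoo.loadModel(criteria) or optModelUrls instead, and a real model usually wants a Translator for its input/output types, as the linked docs describe):

```java
import java.nio.file.Paths;
import ai.djl.inference.Predictor;
import ai.djl.ndarray.NDList;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

public class LoadTfModel {
    public static void main(String[] args) throws Exception {
        Criteria<NDList, NDList> criteria = Criteria.builder()
                .setTypes(NDList.class, NDList.class)   // raw tensors in and out
                .optModelPath(Paths.get("my_model/"))   // SavedModel dir (illustrative)
                .optEngine("TensorFlow")                // force the TensorFlow engine
                .build();

        try (ZooModel<NDList, NDList> model = criteria.loadModel();
             Predictor<NDList, NDList> predictor = model.newPredictor()) {
            // NDList input = ...;  // build an NDList from your input tensors
            // NDList output = predictor.predict(input);
        }
    }
}
```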
I created an Object Detection model using Google AutoML. I'd like to export the model to Core ML but on the export page this option isn't showing up. I can't find anything in the AutoML Documentation about when this export option is disabled.
Additionally, if I try to export from the command line I get the error message Unsupported model export format [core_ml] for model.
Can someone provide some clarity about why this isn't an option? Thanks in advance for your help.
The issue is confusion between the AutoML Vision documentation, which focuses on classification models, and the documentation specific to AutoML Vision Object Detection models. In this index you can see all of those docs.
As you can see in the links, for the specific case of object detection models there is no option to export to Core ML.
Please help me, I cannot solve this:
ERROR: (gcloud.beta.ml.models.versions.create) FAILED_PRECONDITION: Field: version.deployment_uri Error: The model directory gs://valued-aquifer-164405-ml/mnist_deployable_garu_20170413_150711/model/ is expected to contain exactly one of the following: the 'export.meta' file, or 'saved_model.pb' file or 'saved_model.pbtxt' file.Please make sure one of these files exists and you have read access to it.
I am new to Google Cloud. I had the same kind of issue when trying to create a version for a model, and I resolved it.
You need to do two steps:
1. Export the model: this gives you saved_model.pbtxt. I am using TensorFlow, so I used export_savedmodel().
2. Upload saved_model.pbtxt and the variables folder to Cloud Storage.
Then try again.
This command has since been updated to gcloud ml-engine versions create.
It is recommended to run gcloud components update to install the latest GCloud, then follow the new instructions for deploying your own models to Cloud ML Engine.
Note: If you experience issues with GCloud in the future, it is recommended to report the issue in a Public Issue Tracker.
How do I install the JRip package for ensemble learning in Weka? As part of aggregating collective clustering results through an ensemble, I need to install the JRip package. I could not find any relevant link on the web giving access to it. Please help.
The JRip algorithm is bundled with Weka. Here is a reference to it in Weka's JavaDoc:
http://weka.sourceforge.net/doc.stable/weka/classifiers/rules/JRip.html
Here is a download link:
http://www.cs.waikato.ac.nz/ml/weka/
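Since JRip ships inside the core weka.jar, no extra package installation is needed; you can use it directly from Java. A minimal training sketch (the dataset filename is illustrative):

```java
import weka.classifiers.rules.JRip;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class JRipDemo {
    public static void main(String[] args) throws Exception {
        // Load an ARFF file and set the class attribute.
        Instances data = DataSource.read("mydata.arff");
        data.setClassIndex(data.numAttributes() - 1);

        JRip ripper = new JRip();    // RIPPER rule learner, bundled with Weka
        ripper.buildClassifier(data);
        System.out.println(ripper);  // prints the induced rule list
    }
}
```

It can also be run straight from the command line with weka.jar on the classpath: `java -cp weka.jar weka.classifiers.rules.JRip -t mydata.arff`.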