GCP: how to transfer a model to another project - google-cloud-platform

I have a Natural Language model (within a dataset) in a certain project in GCP.
How can I move this dataset and its model to another project?

Let's suppose that you are using AutoML Natural Language. There is no direct mechanism to migrate a model to another project. Nevertheless, there is a current feature request for this functionality; you can upvote the issue in the Public Issue Tracker (PIT) to demonstrate your interest.
The only available option I can think of is exporting the dataset from the existing project, importing it into the new project, and retraining your model there.
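A minimal sketch of that export/import round trip with the google-cloud-automl Python client; the project IDs, dataset IDs, and bucket paths are placeholders, and the bucket must be accessible from both projects:

```python
from google.cloud import automl

client = automl.AutoMlClient()

# 1. Export the dataset from the source project to a Cloud Storage bucket.
src_dataset = client.dataset_path("source-project", "us-central1", "TCN1234567890")
export_op = client.export_data(
    request={
        "name": src_dataset,
        "output_config": {
            "gcs_destination": {"output_uri_prefix": "gs://my-bucket/nl-export/"}
        },
    }
)
export_op.result()  # block until the long-running operation finishes

# 2. Import the exported CSV into a dataset in the destination project,
#    then retrain the model there. The exact export file name varies,
#    so check the bucket after the export completes.
dst_dataset = client.dataset_path("destination-project", "us-central1", "TCN0987654321")
import_op = client.import_data(
    request={
        "name": dst_dataset,
        "input_config": {
            "gcs_source": {"input_uris": ["gs://my-bucket/nl-export/export.csv"]}
        },
    }
)
import_op.result()
```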

Related

Create a model with Google ML natural language or another potential service

So I have been collecting data consisting of numerous text descriptions of articles, where each description is structured differently. Now I would have to "create" an algorithm that extracts the title of each article for me, which is a hard task. I have come across Google ML natural language and it seems to be able to create one for me.
Unfortunately, I am not really able to find out exactly how I can use it,
so my question is: how precisely can I set it up? Additionally, it would be helpful to know whether Firebase has such a service, since I am planning to build a Firebase project.
Thanks in advance for any help!
Unfortunately, models created using Google AutoML Natural Language are not exportable to TensorFlow Lite (mobile models). Based on your use case, you will need a model for text classification; the provided link has a sample of how this kind of model works. You can follow this tutorial to train a custom model on the data that you have so it can identify the title of an article for you.
Once training is done, you can:
Deploy it in Firebase (see the sketch after this list)
Download the model to your device and perform testing.
You can find detailed instructions, from training the model to testing it on your device, for either iOS or Android.
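For the Firebase deployment step, here is a minimal sketch with the firebase-admin Python SDK, assuming you already have a TensorFlow Lite model file; the bucket, file name, and display name are placeholders:

```python
import firebase_admin
from firebase_admin import ml

# Initialize with a storage bucket the Admin SDK can upload the model file to.
firebase_admin.initialize_app(options={"storageBucket": "my-project.appspot.com"})

# Upload the local .tflite file and register it as a Firebase ML model.
source = ml.TFLiteGCSModelSource.from_tflite_model_file("text_classifier.tflite")
model = ml.Model(
    display_name="article_title_classifier",  # hypothetical name
    tags=["text_classification"],
    model_format=ml.TFLiteFormat(model_source=source),
)
created = ml.create_model(model)

# Publish so devices can download it via the Firebase ML client SDKs.
ml.publish_model(created.model_id)
```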

GCP AutoML Vision - How to count the number of annotations each team member makes in the Annotation Tool using the Web UI?

We are automating the process of our deep learning project. Images are automatically uploaded to a dataset in AutoML Vision (Object Detection) on the Google Cloud Platform. A couple of team members regularly annotate the uploaded images using the provided Annotation Tool in the Web UI. We need to measure the productivity of our team members by counting the annotations each of them makes. I haven't found an efficient solution yet and would appreciate it if you could share your ideas.
There is no feature to identify who annotated which images; however, one approach is to split the work between your team members and distribute the labels that each one should annotate. Then you can simply count the number of annotations for each label. For instance, following this guide, you could give Baked Goods and Cheese to one collaborator and Salad and Seafood to another, and so on, so that you can check the totals in the UI. Additionally, the label statistics can give you more details about the annotations for each label (and hence for each team member); note that statistics are only available in the AutoML Vision Object Detection UI.
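If you export the dataset, you can also tally per-label counts offline. A minimal sketch, assuming the object-detection export CSV keeps the label in the third column (one row per bounding box); the file name is a placeholder:

```python
import csv
from collections import Counter

counts = Counter()
# Each row of an AutoML Vision object-detection export looks roughly like:
# set,image_gcs_uri,label,x_min,y_min,...  (one row per annotation)
with open("vision_export.csv", newline="") as f:
    for row in csv.reader(f):
        if len(row) > 2 and row[2]:
            counts[row[2]] += 1  # one annotation per row

for label, n in counts.most_common():
    print(f"{label}: {n}")
```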
A more automated approach, in case you are interested, is the Human Labeling Service; according to the documentation, it is currently only available by email because of Coronavirus (COVID-19) measures.
If the recommendations above don't fit your needs, you can always file a Feature Request asking for the desired functionality, adding the required details.

Unable to export trained model from AutoML Vision

I trained a model using Google AutoML Vision and now I want to export it to use it locally. I tried this tutorial from the official Google docs with no success.
Actually, in the model list, when I click the three dots (more actions) there is no export option.
Even on the Test & Use page there is no option to export the model.
Thanks in advance,
First of all, the tutorial you are following is for AutoML Tables which, although similar, is not exactly the same as AutoML Vision.
For AutoML Vision you can train two types of models, Cloud-hosted and Edge-exportable. As the names imply, only the latter can be exported.
Here you can see the documentation for exporting AutoML Vision Edge models.
My assumption is that you have trained a Cloud-hosted model, which is not exportable.
There is currently an open feature request to allow this behavior. You can find it here. If you are also interested in it, you can star it to stay updated on its progress.
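For reference, once you have an Edge model, the export itself can be driven from the google-cloud-automl Python client. A minimal sketch; the project, model ID, format, and bucket are placeholders:

```python
from google.cloud import automl

client = automl.AutoMlClient()

# Only AutoML Vision *Edge* models support export; Cloud-hosted models do not.
model_name = client.model_path("my-project", "us-central1", "ICN1234567890")
operation = client.export_model(
    request={
        "name": model_name,
        "output_config": {
            "model_format": "tflite",  # e.g. tflite, tf_saved_model, edgetpu_tflite
            "gcs_destination": {"output_uri_prefix": "gs://my-bucket/vision-export/"},
        },
    }
)
operation.result()  # wait for the export to complete
```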

Google AutoML: train using one account and execute translations with another?

We are using the AutoML service in Google with highly trained models specific to our business. We are looking for a solution where we can train a model in a separate "training & testing" account, then somehow use or move that model into our production account.
Is this something that is possible, i.e., export then import the model? Or is there some function built right into the platform where we can "move" a trained model from one account to another?
The reason for this is that we have a production budget for translation service usage, but the training of the model falls outside of that cost. We want to physically separate this activity in the platform if possible.
Thanks.
According to the docs, you can export your custom model to Cloud Storage, download the model to your server, and then use Docker to make the model available for predictions.
After that, you download your exported model from Cloud Storage and start the Docker container, so your model is ready to receive prediction requests in the other project.
https://cloud.google.com/automl-tables/docs/model-export
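A minimal sketch of querying such an exported model served locally, assuming the container layout from the AutoML Tables export guide (image name, port, and payload shape come from that guide; the feature names are hypothetical):

```python
import requests

# Beforehand, start the exported model in Docker, roughly:
#   docker run -v /path/to/model:/models/default/0000001 -p 8080:8080 \
#       gcr.io/cloud-automl-tables-public/model_server
payload = {"instances": [{"feature_a": "value", "feature_b": 42}]}  # hypothetical features
resp = requests.post("http://localhost:8080/predict", json=payload)
resp.raise_for_status()
print(resp.json())  # predictions from the locally served model
```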

Changing preprocessing in a trained model on SageMaker

I have trained a model on SageMaker together with preprocessing. By preprocessing I mean that I added an inference.py file with input_handler and output_handler functions according to this: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst.
It works nicely, but the problem is that every time I want to change something in the preprocessing I have to retrain the model. Is there maybe some other way to do this without retraining?
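For context, the handlers in question have this shape (the handler names and signatures come from the linked guide; the bodies here are an illustrative sketch):

```python
import json

def input_handler(data, context):
    """Pre-process the request before it is sent to TensorFlow Serving."""
    if context.request_content_type == "application/json":
        payload = json.loads(data.read().decode("utf-8"))
        # Hypothetical pre-processing step: reshape the raw input into
        # the {"instances": [...]} format TensorFlow Serving expects.
        return json.dumps({"instances": payload["inputs"]})
    raise ValueError(f"Unsupported content type: {context.request_content_type}")

def output_handler(response, context):
    """Post-process the TensorFlow Serving response before returning it."""
    if response.status_code != 200:
        raise ValueError(response.content.decode("utf-8"))
    return response.content, "application/json"
```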
A trained model is simply a function that takes arguments (the input vector) and returns an output (the output vector/value). If you change the input with your modified pre-processing, you need to change the implementation of your function, which means you need to retrain your model.
Retraining your models is a good habit even if you don't change anything in your pre-processing, as the input changes over time. The classic example of house prices highlights that your model is only good for the data it was trained on; if after a couple of years the market has changed, you have to retrain your model.
Some models are retrained every day. Amazon SageMaker makes it easy to train your model by calling the training API and waiting for it to finish. You can automate the process of building a new Docker image (if you changed your pre-processing), calling the training API, and then calling the deployment API to deploy the model to SageMaker hosting, ECS/EKS, or any other container hosting service.
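A minimal sketch of that retrain-and-redeploy loop with the SageMaker Python SDK; the script names, S3 path, and instance types are placeholders:

```python
import sagemaker
from sagemaker.tensorflow import TensorFlow

role = sagemaker.get_execution_role()

# source_dir holds train.py plus the inference.py with the handlers,
# so a changed pre-processing script ships with the retrained model.
estimator = TensorFlow(
    entry_point="train.py",
    source_dir="code",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.4",
    py_version="py37",
)

# Retrain on the current data, then redeploy the new artifact.
estimator.fit({"training": "s3://my-bucket/training-data"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```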