Changing preprocessing in a trained model on SageMaker - amazon-web-services

I have trained a model on SageMaker together with preprocessing. By preprocessing I mean I added an inference.py file with input_handler and output_handler functions according to this guide: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst.
It works nicely, but the problem is that every time I want to change something in the preprocessing I have to retrain the model. Is there maybe some other way to do this without retraining?

A trained model is simply a function that takes arguments (the input vector) and returns an output (the output vector/value). If you change the input with your modified pre-processing, you need to change the implementation of that function. This means that you need to retrain your model.
Retraining your models is a good habit, even if you don't change anything in your pre-processing, as the input changes over time. The classic house-prices example highlights that your model is only good for the data that you trained it on. If after a couple of years the market has changed, you have to retrain your model.
Some models are retrained every day. Amazon SageMaker makes it easy to train your model by calling the training API and waiting for it to finish. You can automate the process of building a new Docker image (if you changed your pre-processing), calling the training API, and then calling the deployment API to deploy to SageMaker hosting, ECS/EKS, or any other container hosting service.
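As a rough sketch of that automation with the SageMaker Python SDK (the entry point, source_dir, S3 paths, role ARN and instance types below are placeholders, and exact parameter names differ between SDK versions):

```python
# Hypothetical retrain-and-redeploy script; adjust names and versions to your project.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",            # training script (placeholder)
    source_dir="code",                 # also holds the updated inference.py handlers
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.4",
    py_version="py37",
)

# The "train API" call: launches a managed training job and waits for it
estimator.fit({"train": "s3://my-bucket/training-data/"})

# The "deployment API" call: creates an endpoint serving the freshly trained model
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```

The same two calls can be wrapped in a scheduled job or CI pipeline so that a pre-processing change triggers a fresh training run and redeployment automatically.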

Related

Training multiple models in AWS SageMaker

Can I train multiple models in AWS SageMaker by evaluating the models in the train.py script, and how can I get back multiple metrics from multiple models?
Any links, docs or videos would be useful.
Yes. What you write in a SageMaker training script (assuming you use something that lets you pass custom code, like your own container or a framework container) is flexible, and does not need to be just one model, or even ML at all. You can definitely write multiple model trainings in a single container, and pull all the related metrics using SageMaker metric capture via regex; see an example regex here with the Sklearn random forest.
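As a hedged sketch of that regex-based metric capture (the metric names, regexes, script and role below are illustrative; train.py is assumed to print lines such as "model_a accuracy: 0.93"):

```python
# Hypothetical single training job that trains several models and surfaces one metric per model.
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",                      # trains model_a and model_b in one script
    framework_version="0.23-1",
    instance_type="ml.m5.xlarge",
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role ARN
    metric_definitions=[
        # SageMaker scrapes these regexes from the job's stdout/CloudWatch logs
        {"Name": "model_a:accuracy", "Regex": r"model_a accuracy: ([0-9\.]+)"},
        {"Name": "model_b:accuracy", "Regex": r"model_b accuracy: ([0-9\.]+)"},
    ],
)
estimator.fit({"train": "s3://my-bucket/data/"})
```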
That being said, it is often a better idea to separate things and have one model per SageMaker job, for the following reasons among others:
- It allows you to separate model metadata and metrics and compare them easily with the SageMaker metadata service.
- It allows you to specialize hardware to each model and get better economics; each model has its own sweet spot when it comes to CPU, GPU and RAM.
- It allows you to use the exact same container for a single training run but also for Bayesian hyperparameter search, a method that can be both faster and cheaper than a regular grid search (see the sketch after this list).
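A rough sketch of that Bayesian search, reusing the estimator object from the sketch above; the objective metric, hyperparameter names and ranges are made up and must match what your train.py actually reads:

```python
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, ContinuousParameter

tuner = HyperparameterTuner(
    estimator=estimator,                       # the same container/script as a single training
    objective_metric_name="validation:accuracy",
    metric_definitions=[{"Name": "validation:accuracy",
                         "Regex": r"validation accuracy: ([0-9\.]+)"}],
    hyperparameter_ranges={                    # hypothetical hyperparameters
        "max_depth": IntegerParameter(3, 10),
        "learning_rate": ContinuousParameter(0.01, 0.3),
    },
    strategy="Bayesian",                       # Bayesian optimization rather than grid search
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://my-bucket/data/"})
```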

Google AutoML: train using one account and execute translations with another?

We are using the AutoML service in Google with highly trained models specific to our business. We are looking for a solution where we can train a model in a separate "training & testing" account, then somehow use or move that model into our production account.
Is this something that is possible? I.e. export and then import the model? Or is there some function built right into the platform where we can "move" a trained model from one account to another?
The reason for this is that we have a production budget for translation service usage, but the training of the model falls outside of that cost. We want to physically separate this activity in the platform if possible.
Thanks.
According to the docs, you can export your custom model to Cloud Storage, download it to your own server, and then use Docker to make the model available for predictions.
So you download the exported model from Cloud Storage and start the Docker container with it, and the model is ready to receive prediction requests in the other (production) project.
https://cloud.google.com/automl-tables/docs/model-export
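A rough sketch of that workflow in Python, following the linked export guide; the bucket, export prefix, local path and project ID are placeholders, and the model-server image name and mount path should be checked against the current docs:

```python
# Hypothetical: download the exported AutoML Tables model and serve it locally with Docker.
import os
import subprocess
from google.cloud import storage

BUCKET = "my-export-bucket"                    # bucket the model was exported to
PREFIX = "model-export/tbl/my_model"           # export folder inside the bucket
LOCAL_DIR = "/opt/automl/model"                # where to place the model on this server

client = storage.Client(project="training-project-id")   # the "training & testing" project
for blob in client.list_blobs(BUCKET, prefix=PREFIX):
    if blob.name.endswith("/"):                # skip folder placeholder objects
        continue
    dest = os.path.join(LOCAL_DIR, os.path.relpath(blob.name, PREFIX))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    blob.download_to_filename(dest)

# Start the model server container (image name and mount path per the export docs; verify them)
subprocess.run([
    "docker", "run", "-d",
    "-v", f"{LOCAL_DIR}:/models/default/0000001",
    "-p", "8080:8080",
    "gcr.io/cloud-automl-tables-public/model_server",
], check=True)
```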

AWS SageMaker - using cross validation instead of a dedicated validation set?

When I train my model locally I use a 20% test set and then cross validation. SageMaker seems like it needs a dedicated validation set (at least in the tutorials I've followed). Currently I have 20% test and 10% validation, leaving 70% to train - so I lose 10% of my training data compared to when I train locally, and there is some performance loss as a result.
I could just take my locally trained models and overwrite the SageMaker models stored in S3, but that seems like a bit of a workaround. Is there a way to use SageMaker without having to have a dedicated validation set?
Thanks
SageMaker seems to allow a single training set, while in cross validation you iterate over, for example, 5 different training sets, each one validated on a different hold-out set. So it seems that the SageMaker training service is not well suited for cross validation. Of course, cross validation is usually useful with small (to be accurate, low-variance) data, so in those cases you can set the training infrastructure to local (so it doesn't take a lot of time) and then iterate manually to achieve cross validation functionality. But it's not something that works out of the box.
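A rough sketch of that manual iteration: build the k folds locally and run each fold as its own SageMaker job in local mode. train.py, the role ARN and how each fold's metric is collected are placeholders:

```python
import os
import pandas as pd
from sklearn.model_selection import KFold
from sagemaker.sklearn.estimator import SKLearn

df = pd.read_csv("full_training_set.csv")       # everything except the 20% test set
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kf.split(df)):
    train_dir = os.path.abspath(f"folds/{fold}/train")
    val_dir = os.path.abspath(f"folds/{fold}/validation")
    os.makedirs(train_dir, exist_ok=True)
    os.makedirs(val_dir, exist_ok=True)
    df.iloc[train_idx].to_csv(os.path.join(train_dir, "data.csv"), index=False)
    df.iloc[val_idx].to_csv(os.path.join(val_dir, "data.csv"), index=False)

    estimator = SKLearn(
        entry_point="train.py",                  # your script trains and scores one fold
        framework_version="0.23-1",
        instance_type="local",                   # local container, no managed instances
        role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder
    )
    estimator.fit({
        "train": f"file://{train_dir}",
        "validation": f"file://{val_dir}",
    })
    # Read each fold's validation score from the job logs/output and average across folds.
```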
Sorry, can you please elaborate which tutorials you are referring to when you say "SageMaker seems like it needs a dedicated validation set (at least in the tutorials I've followed)"?
SageMaker training exposes the ability to separate datasets into "channels", so you can split your dataset in whichever way you please.
See here for more info: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-running-container.html#your-algorithms-training-algo-running-container-trainingdata
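A minimal sketch of what that looks like: channels are just named inputs, so nothing forces a "validation" channel to exist. The script, role and S3 path are placeholders:

```python
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",
    framework_version="0.23-1",
    instance_type="ml.m5.xlarge",
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder
)

# Only a single "train" channel; inside the container it appears under /opt/ml/input/data/train/
estimator.fit({"train": "s3://my-bucket/full-training-set/"})
```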

Process online prediction request

When using ML Engine for online prediction, we send a request and get the prediction results. That's cool, but the request is usually different from the model input. For example:
- A categorical variable can be in the request, but the model expects an integer mapped to that category.
- For a given feature we may need to create multiple features, like splitting text into two or more features.
- We might need to exclude some of the features in the request, like a constant feature that's useless for the model.
How do you handle this process? My solution is to receive the request with an App Engine app, send it to Pub/Sub, process it in Dataflow, save it to GCS and trigger a Cloud Function to send the processed request to the ML Engine endpoint and get the predicted result. This may be over-engineering and I want to avoid that. Any advice regarding XGBoost models would be appreciated.
We are testing out a feature that allows a user to provide some Python code to be run server-side. This will allow you to do the types of transformations you are trying to do, either as a scikit-learn pipeline or as a Python function. If you'd like to test it out, please contact cloudml-feedback@google.com.
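To illustrate the kind of plain-Python transformation this is about (the field names, category mapping and feature choices below are made up):

```python
# Hypothetical request-to-feature-vector preprocessing for an XGBoost model.
CATEGORY_TO_INT = {"red": 0, "green": 1, "blue": 2}   # mapping used at training time

def preprocess(request: dict) -> list:
    """Turn a raw JSON request into the numeric feature vector the model expects."""
    features = []
    # 1. Map a categorical string onto its integer code
    features.append(CATEGORY_TO_INT.get(request["color"], -1))
    # 2. Split one request field into several derived features
    first, _, rest = request["full_name"].partition(" ")
    features.append(len(first))
    features.append(len(rest))
    # 3. Constant/useless request fields are excluded simply by never reading them
    return features
```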

Machine Learning (tensorflow / sklearn) in Django?

I have a Django form which collects user responses. I also have a TensorFlow sentence classification model. What is the best/standard way to put these two together?
Details:
The TensorFlow model was trained on the Movie Review data from Rotten Tomatoes.
Every time a new row is added to my response model, I want the TensorFlow code to classify it (+ or -).
Basically, I have a Django project directory and two .py files for classification. Before going ahead myself, I wanted to know what the standard way is to integrate machine learning algorithms into a web app.
It'd be awesome if you could suggest a tutorial or a repo.
Thank you!
Asynchronous processing
If you don't need the classification result from the ML code to be passed immediately to the user (e.g. as a response to the same POST request that submitted the data), then you can always queue the classification job to be run in the background, or even on a different server with more CPU/memory resources (e.g. with django-background-tasks or Celery).
A queued task would be, for example, to populate the field UserResponse.class_name (positive, negative) on the database rows that have that field blank (not yet classified).
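A sketch of that queued task using django-background-tasks; UserResponse, its fields and classify_text() are placeholder names:

```python
# Hypothetical background task that classifies rows not yet labelled.
from background_task import background
from myapp.models import UserResponse
from myapp.ml import classify_text          # wraps the saved TensorFlow model (placeholder)

@background(schedule=5)                      # run ~5 seconds after being queued
def classify_pending_responses():
    for response in UserResponse.objects.filter(class_name=""):
        response.class_name = classify_text(response.text)   # "positive" / "negative"
        response.save(update_fields=["class_name"])
```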
Real time notification
If the ML code is slow and you want to return the result to the user as soon as it is available, you can use the asynchronous approach described above and pair it with a real-time notification (e.g. socket.io to the browser; this can be triggered from the queued task).
This becomes necessary if the ML execution time is so long that it might time out the HTTP request in the synchronous approach described below.
Synchronous processing, if ML code is not CPU intensive (fast enough)
If you need the classification result returned immediately, and the ML classification is fast enough*, you can do so within the HTTP request-response cycle (the POST request returns after the ML code is done, synchronously).
* Fast enough here means it wouldn't time out the HTTP request/response, and the user wouldn't lose patience.
Well, I had to develop the same solution myself. In my case, I used Theano. If you are using TensorFlow or Theano, you are able to save the model you have built. So first, train the model on your training dataset, then save the model using the library you have chosen. You only need to deploy into your Django web application the part of your code that handles the prediction. Then, using a simple POST, you can give the user the predicted class of their sentence quickly enough. Also, if you think it is needed, you can run a job periodically to retrain your model with the new input patterns and save it once more.
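A rough sketch of that approach in Django with a saved Keras/TensorFlow model (the model path, preprocessing and field names are placeholders):

```python
import json
import numpy as np
import tensorflow as tf
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

# Load the saved model once at import time so each request only pays for a forward pass.
model = tf.keras.models.load_model("sentiment_model.h5")    # placeholder path, saved after training

def vectorize(sentence: str) -> np.ndarray:
    """Placeholder: apply the exact same preprocessing/tokenization used at training time."""
    return np.zeros((1, 100))

@csrf_exempt
@require_POST
def classify(request):
    sentence = json.loads(request.body)["sentence"]
    score = float(model.predict(vectorize(sentence))[0][0])
    return JsonResponse({"label": "positive" if score >= 0.5 else "negative"})
```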
I would suggest not using Django, since it will add execution time to the solution.
Instead, you could use Node to serve a React frontend that interacts with a TensorFlow REST API running as a standalone server.
As the answer above this post suggests, it would be better to use WebSockets; you could use a React WebSocket module so it refreshes your components once the state of the component changes.
Hope this helps.