Access the action distribution network in Ray's RLlib library

I would like to train a Proximal Policy Optimization (PPO) type model using RLlib and then serve the action distribution model using TensorFlow Lite or the equivalent PyTorch technology. I am interested in determining the ranking of actions, not just the policy.
Is there a way to extract this network from the trained policy? This tutorial shows how to train a PPO algorithm. Is there a simple mechanism to extract the desired neural network from the algo object so that I can convert it to a form suitable for efficient serving in an AWS Lambda?
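One answer-style sketch, assuming a Torch-based PPO policy. RLlib's APIs change between releases, so the attribute names used here (get_policy(), policy.model) should be verified against your version; the wrapper class and file names are illustrative, not canonical:

import torch
from ray.rllib.algorithms.ppo import PPOConfig

# Hedged sketch: extract the logits-producing network from a trained RLlib PPO
# policy and export it via TorchScript (the PyTorch analogue of TF Lite serving).
algo = PPOConfig().environment("CartPole-v1").framework("torch").build()
algo.train()  # one training iteration; repeat as needed

policy = algo.get_policy()   # default (single-agent) policy
rllib_model = policy.model   # a TorchModelV2, which is also an nn.Module

class DistributionInputs(torch.nn.Module):
    """Wrapper exposing only the forward pass that yields action-distribution
    logits, so actions can be ranked without the rest of RLlib at serve time."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, obs):
        logits, _state = self.model({"obs": obs}, [], None)
        return logits

example_obs = torch.zeros(1, 4)  # CartPole-v1 observation shape; adjust per env
traced = torch.jit.trace(DistributionInputs(rllib_model), example_obs)
traced.save("ppo_action_logits.pt")  # load with torch.jit.load() in the Lambda

Softmaxing the saved logits gives a ranking over actions; tracing may need adjustment if your model's forward pass does anything data-dependent.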

Related

Distributed Spark on Amazon SageMaker

I have built a SparkML collaborative filtering algorithm that I want to train and deploy on SageMaker. What is the best way to achieve this, other than BYOC?
Also, I want to understand how distributed training works in SageMaker if we go the BYOC route.
I have tried to look for good resources on this, but the documentation is pretty sparse on the distributed aspect. You can provide instance_count in your Estimator, but how is it used in the BYOC scenario? Do we have to handle it in the training scripts/code? Any example of doing that with SparkML?
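For the instance_count question: in the BYOC case, SageMaker launches that many identical containers and leaves coordination to your code, exposing the cluster layout in /opt/ml/input/config/resourceconfig.json. A hedged sketch of a training script reading it (the Spark helper functions are hypothetical placeholders for your own cluster bootstrap logic):

import json

# Hedged sketch: discover the SageMaker training cluster from inside a BYOC
# container. SageMaker writes the host list here; your script decides roles.
with open("/opt/ml/input/config/resourceconfig.json") as f:
    resource_config = json.load(f)

current_host = resource_config["current_host"]  # e.g. "algo-1"
hosts = sorted(resource_config["hosts"])        # e.g. ["algo-1", "algo-2"]

if current_host == hosts[0]:
    # Hypothetical helper: elect the first host as Spark master and train there.
    start_spark_master_and_train(workers=hosts[1:])
else:
    # Hypothetical helper: the remaining hosts join as Spark workers.
    join_spark_cluster_as_worker(master=hosts[0])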

Google AutoML Vision API and Google Vision API Custom Algorithm

I am looking at the Google AutoML Vision API and the Google Vision API. I know that with the AutoML Vision API you get a custom model, because you train ML models on your own images and define your own labels. And when using the Google Vision API, you are using a pretrained model...
However, I am wondering if it is possible to use my own algorithm (one which I created, not one provided by Google) with the Vision / AutoML Vision API instead? ...
Sure, you can definitely deploy your own ML algorithm on Google Cloud without being tied to the Vision or AutoML APIs.
Two approaches that I have used many times for this same use case:
Serverless approach, if your model is relatively light on computational resources: deploy your own custom Cloud Function. More info here.
To be more specific, the way it works is that you just call your Cloud Function, passing your image directly (base64-encoded or pointing to a storage location). The function then automatically allocates all required resources, runs your custom algorithm to process the image and/or run inference, sends the results back, and vanishes (all resources released, no more running costs). Neat :) A minimal sketch of such a function follows below.
Google AI Platform. More info here
Use AI Platform to train your machine learning models at scale, to host your trained model in the cloud, and to use your model to make predictions about new data.
If in doubt, go for AI Platform, as the whole pipeline is nicely lined up for any of your custom code/models. It is well suited to production deployment too.
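Picking up the serverless option above, here is a hedged sketch of an HTTP-triggered Cloud Function that accepts a base64-encoded image; the function name and the run_my_model helper are hypothetical stand-ins for your own algorithm:

import base64
import json

def classify_image(request):
    """HTTP Cloud Function entry point; expects JSON like {"image_b64": "..."}."""
    payload = request.get_json(silent=True) or {}
    image_bytes = base64.b64decode(payload["image_b64"])
    prediction = run_my_model(image_bytes)  # hypothetical: your custom algorithm
    return json.dumps({"prediction": prediction})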

Create a Multi Model Endpoint using AWS Sagemaker Boto

I have created 2 models, which are not too complex, renamed them, and placed them in the same location in an S3 bucket.
I need to create a multi-model endpoint such that the 2 models share the same endpoint.
The model I am using is the AWS built-in Linear Learner, as a regressor.
I am stuck as to how they should be deployed.
SageMaker's Linear Learner algorithm container does not currently implement the requirements for multi-model endpoints. You could request support in the AWS Forums.
You could also build your own version of the Linear Learner algorithm: to deploy the models to a multi-model endpoint, you would build your own container that meets the requirements for multi-model endpoints and implements the algorithm yourself. This sample notebook gives an example of how to create a multi-model-compatible container that serves MXNet models, which you could adapt to implement a Linear Learner algorithm:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/multi_model_bring_your_own/multi_model_endpoint_bring_your_own.ipynb
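For reference, once you have a multi-model-capable container image, wiring up the endpoint with boto3 looks roughly like the hedged sketch below; all names, the role ARN, and the image URI are placeholders:

import boto3

# Hedged sketch: create a multi-model endpoint with boto3 and route requests
# to a specific model artifact at invocation time via TargetModel.
sm = boto3.client("sagemaker")

sm.create_model(
    ModelName="my-multi-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.<region>.amazonaws.com/my-mme-image:latest",
        "Mode": "MultiModel",                      # the key multi-model setting
        "ModelDataUrl": "s3://my-bucket/models/",  # prefix holding both model .tar.gz files
    },
)

sm.create_endpoint_config(
    EndpointConfigName="my-mme-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-multi-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(EndpointName="my-mme", EndpointConfigName="my-mme-config")

# At inference time, select which of the two models handles each request:
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="my-mme",
    TargetModel="model-a.tar.gz",  # path relative to ModelDataUrl
    ContentType="text/csv",
    Body="1.0,2.0,3.0",
)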

Is it possible to undeploy a Google Cloud AutoML Vision Image Classification model when it's not being used?

I know how to undeploy/redeploy an Object Detection model - but my overall project uses both Object Detection and Image Classification. What's the best way to save money when we're not using both?
It's easy to remove the deployment of the Object Detection model, and then re-deploy it when we have data to process. Can the same be done for the Image Classification models?
In both Object Detection and Image Classification you pay based on resource usage.
Regarding your question, it's important to take into account that you pay per deployed node, because the model's associated resources remain allocated to prevent delays in your predictions. That's why, to avoid incurring charges when you are not using the service, you should undeploy the models. You can do this for both Object Detection and Image Classification models.
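A hedged sketch of doing this programmatically with the AutoML client library; the project, location, and model IDs are placeholders, and the same calls apply to both classification and object detection models:

from google.cloud import automl_v1

# Hedged sketch: undeploy an AutoML model while idle, redeploy before use.
client = automl_v1.AutoMlClient()
model_name = client.model_path("my-project", "us-central1", "ICN1234567890")  # placeholders

# Undeploying stops the per-node charges; both calls are long-running operations.
client.undeploy_model(name=model_name).result()

# Later, redeploy before sending prediction traffic again.
client.deploy_model(name=model_name).result()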

What is Google Cloud's anomaly detection solution for time series streaming data, similar to AWS' Kinesis Random Cut Forest algorithm?

I'm trying to implement an anomaly detection machine learning solution on GCP, but I'm finding it hard to locate a specific solution in Google Cloud ML comparable to AWS' Random Cut Forest solution in Kinesis. I'm streaming IoT temperature sensor data for water heaters.
Does anyone know a TensorFlow/Google solution for this, as my company only uses the Google stack?
I've tried using sklearn models, but none of them can be put into production for streaming data, so I have to use TensorFlow, but I'm a novice. Any suggestions on a good flow to get this done?
I would suggest using the Esper complex event processing engine if your primary concern is analyzing the data stream and catching patterns in real time. It provides an SQL-like event processing language that runs as continuous queries over flowing data. Esper offers abstractions for correlation, aggregation, and pattern detection. It is an open-source project, but a license is required if you want to run the engine on multiple servers to achieve high availability.