sagemaker - factorization machines - deserialize model - amazon-web-services

I estimated a factorization machine model in SageMaker and it saved a file model.tar.gz into an S3 folder.
Is there a way I can load this file in Python and access the parameters of the model, i.e. the factors, directly?
Thanks

As of April 2019: yes. An official AWS blog post was created to show how to open the SageMaker Factorization Machines artifact and extract its parameters: https://aws.amazon.com/blogs/machine-learning/extending-amazon-sagemaker-factorization-machines-algorithm-to-predict-top-x-recommendations/
That being said, be aware that Amazon SageMaker built-in algorithms are primarily built for deployment on AWS, and only SageMaker XGBoost and SageMaker BlazingText are designed to produce artifacts interoperable with their open-source equivalents.
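The approach in the blog post boils down to unpacking the artifact and then reading the parameter arrays with MXNet, since the built-in algorithm serializes an MXNet model. A minimal sketch of the unpacking step (assuming model.tar.gz has already been downloaded from S3; the MXNet loading step is shown in comments and follows the blog post, with the parameter names taken from there):

```python
import tarfile

def extract_artifact(tar_path, dest_dir):
    """Unpack a SageMaker model.tar.gz so its contents can be inspected."""
    with tarfile.open(tar_path, "r:gz") as tar:
        tar.extractall(path=dest_dir)
        return tar.getnames()

# After extraction, the factorization machines parameters can be read with
# MXNet, roughly as in the AWS blog post linked above (hedged sketch;
# the extracted file prefix depends on the artifact's contents):
#   import mxnet as mx
#   m = mx.module.Module.load("<extracted-model-prefix>", 0)
#   V = m._arg_params["v"].asnumpy()          # factor matrix
#   w = m._arg_params["w1_weight"].asnumpy()  # linear weights
#   b = m._arg_params["w0_weight"].asnumpy()  # bias
```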

Related

AWS SageMaker - Upload our own docker image

I am new to AWS SageMaker and I am using this technology for building and training machine learning models. I have now developed a docker image which contains our custom code for TensorFlow. I would like to upload this custom docker image to AWS SageMaker and make use of it.
I have searched various links but could not find proper information on how to upload my own custom docker image.
Can you please suggest recommended links regarding the process of uploading our own docker image to AWS SageMaker?
In order to work with SageMaker, you have to push your container to ECR. The most important thing is that the container must be "adapted" to be compliant with what SageMaker requires, but everything is described here. In addition, if you want to take a look at an example, here is mine, where I use my container with the TF Object Detection API in AWS SageMaker.
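The push itself is only a handful of commands once the container is built. A minimal sketch that constructs the fully qualified ECR image URI SageMaker expects, with the usual CLI workflow in comments (account ID, region, and repository name are placeholders; AWS CLI v2 and Docker are assumed to be installed):

```python
def ecr_image_uri(account_id, region, repository, tag="latest"):
    """Fully qualified image name for a private Amazon ECR repository."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

uri = ecr_image_uri("123456789012", "eu-west-1", "my-sagemaker-tf")

# Corresponding CLI workflow (hedged sketch):
#   aws ecr create-repository --repository-name my-sagemaker-tf
#   aws ecr get-login-password --region eu-west-1 | \
#       docker login --username AWS --password-stdin \
#       123456789012.dkr.ecr.eu-west-1.amazonaws.com
#   docker tag my-sagemaker-tf:latest <uri>
#   docker push <uri>
```

The resulting URI is what you later pass as the container image when creating a SageMaker model or training job.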

Amazon SageMaker multiple-models

I am interested in the Amazon SageMaker multi-model option, where several models run on one endpoint. How does it look in practice? When I send requests to different models, can SageMaker deal with them simultaneously?
Thank you.
You need to specify which model to invoke via the TargetModel parameter of the request; its value is the model artifact name under the S3 prefix configured when creating the SageMaker model.
Invoke a Multi-Model Endpoint
response = runtime_sm_client.invoke_endpoint(
    EndpointName='my-endpoint',
    ContentType='text/csv',
    TargetModel='new_york.tar.gz',
    Body=body)
Save on inference costs by using Amazon SageMaker multi-model endpoints
There are multiple limitations. Currently the SageMaker Multi Model Server (MMS) cannot use GPUs.
Host Multiple Models with Multi-Model Endpoints
Multi-model endpoints are not supported on GPU instance types.
The SageMaker Python SDK is not clear about which framework models support multi-model server deployment, or how. For instance, with Use TensorFlow with the SageMaker Python SDK, the SageMaker endpoint docker image is automatically picked by SageMaker from the images in Available Deep Learning Containers Images. However, it is not clear which framework images are MMS-ready.
Deploy Multiple ML Models on a Single Endpoint Using Multi-model Endpoints on Amazon SageMaker explains building an AWS XGBoost image with MMS. Hence, apparently, the docker image needs to be built with MMS specified as the front end; if an image is not built that way, MMS may not be available.
Such information is missing from the AWS documentation, so if you run into an issue you may need AWS support to identify the cause. Especially since the SageMaker team keeps changing the images, the MMS implementation, and so on, issues are to be expected.
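For completeness, multi-model serving is declared at model-creation time by setting the container Mode to MultiModel and pointing ModelDataUrl at an S3 prefix holding the artifacts, rather than at a single tarball. A hedged boto3-style sketch of the request shape (the dict is only constructed here, not sent; role ARN, image URI, and bucket are placeholders):

```python
# Request shape for sagemaker_client.create_model(**create_model_request);
# not executed here, since it requires real AWS credentials, an MMS-ready
# ECR image, and an IAM role.
create_model_request = {
    "ModelName": "my-multi-model",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/MySageMakerRole",
    "Containers": [{
        "Image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-mms-image:latest",
        "Mode": "MultiModel",  # serve many artifacts from one container
        "ModelDataUrl": "s3://my-bucket/models/",  # prefix holding *.tar.gz files
    }],
}
```

The TargetModel value in invoke_endpoint is then resolved relative to that ModelDataUrl prefix.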
References
SageMaker Inference Toolkit
Multi Model Server
Deploy Multiple ML Models on a Single Endpoint Using Multi-model Endpoints on Amazon SageMaker

AWS SageMaker on GPU

I am trying to train a neural network (TensorFlow) on AWS. I have some AWS credits. From my understanding, AWS SageMaker is the best fit for the job. I managed to load the JupyterLab console on SageMaker and tried to find a GPU kernel, since I know it is the best for training neural networks. However, I could not find such a kernel.
Would anyone be able to help in this regard?
Thanks & Best Regards
Michael
You train models on GPU in the SageMaker ecosystem via two different components:
You can instantiate a GPU-powered SageMaker Notebook Instance, for example p2.xlarge (NVIDIA K80) or p3.2xlarge (NVIDIA V100). This is convenient for interactive development: you have the GPU right under your notebook, can run code on it interactively, and can monitor it via nvidia-smi in a terminal tab - a great development experience. However, when you develop directly on a GPU-powered machine, there are times when you do not use the GPU, for example when you write code or browse documentation. All that time you pay for a GPU that sits idle. In that regard, it may not be the most cost-effective option for your use case.
Another option is to use a SageMaker Training Job running on a GPU instance. This is the preferred option for training, because training metadata (data and model paths, hyperparameters, cluster specification, etc.) is persisted in the SageMaker metadata store, logs and metrics are stored in CloudWatch, and the instance shuts itself down automatically at the end of training. Developing on a small CPU instance and launching training tasks with the SageMaker Training API will help you make the most of your budget, while helping you retain metadata and artifacts for all your experiments. You can see a well-documented TensorFlow example here.
All Notebook GPU and CPU instance types: AWS Documentation.
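The second option can be sketched with the low-level CreateTrainingJob request shape. This is a hedged illustration only (the dict is constructed but not sent; the training image URI, role ARN, and bucket are placeholders):

```python
# Request shape for sagemaker_client.create_training_job(**training_job_request);
# not executed here, since it requires AWS credentials, a real training image,
# and an IAM role.
training_job_request = {
    "TrainingJobName": "tf-gpu-example",
    "AlgorithmSpecification": {
        # Placeholder: a TensorFlow training image from the
        # Available Deep Learning Containers Images list
        "TrainingImage": "<tensorflow-training-image-uri>",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::123456789012:role/MySageMakerRole",
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/output/"},
    "ResourceConfig": {
        "InstanceType": "ml.p3.2xlarge",  # single NVIDIA V100 GPU
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}
```

The GPU is billed only for the lifetime of the job, which is what makes this pattern cheaper than keeping a GPU notebook instance running.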

Hosting model on amazon aws

I have a pre-trained model in Keras. I want to host the model on Amazon AWS for real-time prediction. Can someone list the steps to do this? I am very new to this. How do I deploy my model for predictions?
You could package your own pre-trained algorithm by "containerizing" it via Docker and pushing the image to Amazon ECR (Elastic Container Registry). This documentation page will guide you through how to package your algorithm into a Docker image: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html
You may then directly deploy your packaged algorithm via SageMaker Hosting. This is a three-step process: CreateModel -> CreateEndpointConfig -> CreateEndpoint. Here's the documentation about how to host your packaged algorithm on SageMaker: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html
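The three steps above can be sketched as one helper that issues the calls in order. A hedged sketch (sm would be a boto3 SageMaker client; the names, image, and ARNs are placeholders, and none of this runs without real AWS resources):

```python
# sm = boto3.client("sagemaker")  # passed in so the sketch stays self-contained

def deploy(sm, name, image, model_data, role, instance_type="ml.m5.large"):
    """CreateModel -> CreateEndpointConfig -> CreateEndpoint, as described above."""
    # 1. Register the container image + model artifact as a SageMaker model
    sm.create_model(
        ModelName=name,
        ExecutionRoleArn=role,
        PrimaryContainer={"Image": image, "ModelDataUrl": model_data},
    )
    # 2. Describe the fleet that will serve it
    sm.create_endpoint_config(
        EndpointConfigName=name + "-config",
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": name,
            "InstanceType": instance_type,
            "InitialInstanceCount": 1,
        }],
    )
    # 3. Spin up the HTTPS endpoint
    return sm.create_endpoint(
        EndpointName=name, EndpointConfigName=name + "-config")
```

Once the endpoint reaches InService, predictions go through the runtime client's invoke_endpoint call.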
Cheers,
Yuting

Pros and Cons of Amazon SageMaker VS. Amazon EMR, for deploying TensorFlow-based deep learning models?

I want to build some neural network models for NLP and recommendation applications. The framework I want to use is TensorFlow. I plan to train these models and make predictions on Amazon Web Services. The application will most likely involve distributed computing.
I am wondering what are the pros and cons of SageMaker and EMR for TensorFlow applications?
They both have TensorFlow integrated.
In general terms, they serve different purposes.
EMR is for when you need to process massive amounts of data and rely heavily on Spark, Hadoop, and MapReduce (EMR = Elastic MapReduce). Essentially, if your data volume is large enough to make use of the efficiencies of the Spark, Hadoop, Hive, HDFS, HBase, and Pig stack, then go with EMR.
EMR Pros:
Generally, low cost compared to EC2 instances
As the name suggests (Elastic), you can provision what you need when you need it
Hive, Pig, and HBase out of the box
EMR Cons:
You need a very specific use case to truly benefit from all the offerings in EMR; most users don't take advantage of its entire stack
SageMaker is an attempt to make machine learning easier and distributed. The marketplace provides out-of-the-box algorithms and models for quick use. It's a great service if you conform to the workflows it enforces, meaning creating training jobs and deploying inference endpoints
SageMaker Pros:
Easy to get up and running with Notebooks
Rich marketplace to quickly try existing models
Many different example notebooks for popular algorithms
Predefined kernels that minimize configuration
Easy to deploy models
Allows you to distribute inference compute by deploying endpoints
SageMaker Cons:
Expensive!
Enforces a certain workflow making it hard to be fully custom
Expensive!
From AWS documentation:
Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.
(...) Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.
Conclusion:
If you want to deploy AI models, just use AWS SageMaker