Dockerfile for spaCy and scispaCy model deployment in AWS SageMaker

Can somebody share a sample Dockerfile for the spaCy and scispaCy libraries and their corresponding models in AWS SageMaker, under the category of bringing your own pre-trained models?
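A minimal hypothetical starting point might look like the sketch below. It assumes a serve.py (not shown) that answers /ping and /invocations on port 8080, per the SageMaker bring-your-own-container contract; the scispaCy model and version are only examples, following the URL pattern published in the scispacy README:

# Hypothetical Dockerfile for serving a spaCy/scispaCy model on SageMaker.
FROM python:3.9-slim

RUN pip install --no-cache-dir spacy scispacy flask gunicorn

# scispaCy models install as pip archives; the model and version here are
# examples only.
RUN pip install --no-cache-dir \
    https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.1/en_core_sci_sm-0.5.1.tar.gz

COPY serve.py /opt/program/serve.py
WORKDIR /opt/program
EXPOSE 8080

# SageMaker starts the container as "docker run <image> serve".
ENTRYPOINT ["python", "serve.py"]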

Related

Update a SageMaker Endpoint After Running a Training Job from a Lambda Function

What I am trying to do is automate model retraining. After running the training job from the Lambda function, the model artifacts are saved to an S3 bucket.
Now what I want to do is update an existing endpoint with the newly trained model.
Any documentation, tutorials, or code examples would be useful.
This can be done with the SageMaker Model Registry. You can register the output of your training job and automate the deployment by integrating CI/CD.
After you've created a model group in the registry, you can use SageMaker Studio to set up a deployment pipeline for you.
First create a project, then select the deployment option.
This will automatically create a CodePipeline and a CodeCommit repo. The repo houses the build spec and the default README.md has some good info on what SageMaker has created for you. Even if you want to customize the CI/CD portion, the default SageMaker projects are a good place to start.
This walkthrough of a SageMaker project might be useful for you. The model registry plays nicely with SageMaker pipelines as you can simply add a step to register a trained model.
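If you would rather update the endpoint directly from the Lambda function, without the project scaffolding, the flow in boto3 is: create a model from the new artifacts, create an endpoint config that points at it, then call update_endpoint. A minimal sketch, with placeholder names, image URI, and role ARN (none of these come from the thread):

import time
import boto3

sm = boto3.client("sagemaker")
suffix = str(int(time.time()))  # unique resource names per retraining run

# 1. Create a model from the training job's artifacts in S3 (placeholders).
sm.create_model(
    ModelName=f"my-model-{suffix}",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    PrimaryContainer={
        "Image": "<inference-image-uri>",
        "ModelDataUrl": "s3://my-bucket/output/model.tar.gz",
    },
)

# 2. Create an endpoint config that points at the new model.
sm.create_endpoint_config(
    EndpointConfigName=f"my-config-{suffix}",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": f"my-model-{suffix}",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# 3. Swap the existing endpoint over to the new config; SageMaker replaces
#    the instances behind the endpoint without taking it offline.
sm.update_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName=f"my-config-{suffix}",
)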

What is the difference between using a Hugging Face estimator with a training script and directly using a notebook in AWS SageMaker?

In tutorials like Fine-tuning a PyTorch BERT model and deploying it with SageMaker, and Fine-tune and host Hugging Face models on SageMaker, a Hugging Face estimator is used to call a training script. What would be the difference if I just ran the script's code directly in the notebook itself? Is it because the estimator makes it easier to deploy the model?
You could run the script in the notebook itself, but then it would not deploy with the capabilities SageMaker provides. The estimator is what tells SageMaker which framework you are using and which training script you are passing in. Running the script's code in the notebook would be like training in your local environment; passing the script to the estimator runs a SageMaker training job. The estimator is meant to encapsulate training on SageMaker.
SageMaker Estimator Overview: https://sagemaker.readthedocs.io/en/stable/overview.html
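For illustration, a minimal sketch of such an estimator call (the versions, role, and S3 paths are placeholders, and train.py stands in for your own script):

from sagemaker.huggingface import HuggingFace

# The estimator tells SageMaker which framework container to use and which
# script to run inside it; fit() launches a managed training job instead of
# running train.py in the notebook kernel.
estimator = HuggingFace(
    entry_point="train.py",             # your training script
    source_dir="./scripts",             # uploaded into the training container
    role="<sagemaker-execution-role>",  # placeholder
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    transformers_version="4.26",        # example versions; use a supported combination
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "model_name": "bert-base-uncased"},
)

estimator.fit({"train": "s3://my-bucket/train"})  # placeholder S3 channel

# The same object can then deploy the trained artifacts to an endpoint:
# predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")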

AWS SageMaker - Upload our own docker image

I am new to AWS SageMaker and I am using it to build and train machine learning models. I have now developed a Docker image which contains our custom code for TensorFlow, and I would like to upload this custom Docker image to AWS SageMaker and make use of it.
I have searched various links but could not find proper information on how to upload a custom Docker image.
Can you please suggest recommended links covering the process of uploading our own Docker image to AWS SageMaker?
In order to work with SageMaker, you have to push your container to ECR. The most important thing is that the container must be adapted to be compliant with what SageMaker requires, but everything is described here. In addition, if you want to look at an example, here is mine, where I use my own container with the TF Object Detection API in AWS SageMaker.
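For reference, the usual build-and-push sequence looks roughly like this; the account ID, region, and repository name are placeholders:

# Build the image locally, create an ECR repository, log Docker in to ECR,
# then tag and push. 123456789012 / us-east-1 / my-sagemaker-image are examples.
docker build -t my-sagemaker-image .

aws ecr create-repository --repository-name my-sagemaker-image

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag my-sagemaker-image:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/my-sagemaker-image:latest

docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-sagemaker-image:latest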

Amazon SageMaker multiple-models

I am interested in Amazon SageMaker's multi-model option, where several models run on one endpoint. How does it look in practice? When I send multiple requests to different models, can SageMaker handle them simultaneously?
Thank you.
You need to specify which model to invoke in each request. The TargetModel parameter names the model artifact, relative to the S3 prefix given when creating the SageMaker model.
Invoke a Multi-Model Endpoint
import boto3

# TargetModel selects which artifact under the endpoint's S3 prefix serves this request.
runtime_sm_client = boto3.client("sagemaker-runtime")

response = runtime_sm_client.invoke_endpoint(
    EndpointName="my-endpoint",
    ContentType="text/csv",
    TargetModel="new_york.tar.gz",
    Body=body)
Save on inference costs by using Amazon SageMaker multi-model endpoints
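For context on the setup side: a multi-model endpoint is a single SageMaker model whose ModelDataUrl is an S3 prefix rather than a single artifact, with the container marked Mode='MultiModel'; artifacts under the prefix are loaded on demand. A rough sketch with placeholder names:

import boto3

sm = boto3.client("sagemaker")

# One model definition covers every .tar.gz under the S3 prefix; the image
# must be MMS-ready (see the limitations below). All names are placeholders.
sm.create_model(
    ModelName="my-multi-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    Containers=[{
        "Image": "<mms-ready-inference-image-uri>",
        "ModelDataUrl": "s3://my-bucket/models/",  # prefix holding new_york.tar.gz, etc.
        "Mode": "MultiModel",                      # marks the endpoint as multi-model
    }],
)
# The endpoint config and endpoint are then created the usual way, and each
# request selects a model via TargetModel, as shown above.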
There are multiple limitations. Currently the SageMaker Multi Model Server (MMS) cannot use GPU.
Host Multiple Models with Multi-Model Endpoints
Multi-model endpoints are not supported on GPU instance types.
The SageMaker Python SDK documentation is not clear about which framework models support multi-model-server deployment, or how. For instance, with Use TensorFlow with the SageMaker Python SDK, the SageMaker endpoint Docker image is picked up automatically from Available Deep Learning Containers Images, but it is not clear which of those framework images are MMS-ready.
Deploy Multiple ML Models on a Single Endpoint Using Multi-model Endpoints on Amazon SageMaker explains building an AWS XGBoost image with MMS. Hence, apparently, the Docker image needs to be built with MMS specified as the front end; if an image is not built that way, MMS may not be available.
Such information is missing from the AWS documentation, so if you hit an issue you may need AWS support to identify the cause. And since the SageMaker team keeps changing the images, the MMS implementation, and so on, issues are to be expected.
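As a rough illustration of what "MMS as the front end" means, the SageMaker Inference Toolkit (first reference below) exposes a model_server module that starts MMS inside the container. A hypothetical entrypoint script, assuming the sagemaker-inference package is installed in the image:

# Hypothetical container entrypoint ("serve") built on the SageMaker
# Inference Toolkit, which wires Multi Model Server up as the front end.
from sagemaker_inference import model_server

if __name__ == "__main__":
    # Starts MMS with the toolkit's default handler service; a custom
    # handler class can be supplied via the handler_service argument.
    model_server.start_model_server()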
References
SageMaker Inference Toolkit
Multi Model Server
Deploy Multiple ML Models on a Single Endpoint Using Multi-model Endpoints on Amazon SageMaker

Hosting a model on Amazon AWS

I have a pre-trained model in Keras. I want to host the model on Amazon AWS for real-time prediction. Can someone list the steps to do this? I am very new to this. How do I deploy my model for predictions?
You could package your own pre-trained algorithms by "containerizing" the algorithm via Docker. This documentation page will guide you through how to package your algorithm into a Docker image stored in the Elastic Container Registry: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html
You may then directly deploy your packaged algorithm via SageMaker Hosting. This is a three-step process: CreateModel -> CreateEndpointConfig -> CreateEndpoint. Here's the documentation about how to host your packaged algorithm on SageMaker: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html
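A minimal sketch of that three-step flow in boto3; the names, role, image URI, and S3 path below are placeholders, and the Keras model is assumed to be packaged as model.tar.gz alongside the container's inference code:

import boto3

sm = boto3.client("sagemaker")

# 1. CreateModel: point SageMaker at the container image and model artifacts.
sm.create_model(
    ModelName="my-keras-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    PrimaryContainer={
        "Image": "<your-ecr-inference-image-uri>",
        "ModelDataUrl": "s3://my-bucket/keras/model.tar.gz",
    },
)

# 2. CreateEndpointConfig: choose instance type and count.
sm.create_endpoint_config(
    EndpointConfigName="my-keras-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-keras-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# 3. CreateEndpoint: provisions the real-time HTTPS endpoint.
sm.create_endpoint(
    EndpointName="my-keras-endpoint",
    EndpointConfigName="my-keras-config",
)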