I am trying to deploy a simple support vector regression (SVR) model to Vertex AI using scikit-learn version 1.0, but I am encountering the following error:
Failed to create endpoint "adpool_eval" due to the error: model server container out of memory,
please use a larger machine type for model deployment:
https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types.
The model is saved as "model.joblib" in a cloud storage bucket as required by Vertex AI.
The model is very small and simple, so I can't understand how it is running out of memory. I have tried machines with much more RAM and more CPUs, but the issue persists. The cloud storage bucket, model registry, and endpoint are all in the same region (europe-west1).
The issue seems to be with saving the model using the joblib library. I used the pickle library instead, and I managed to deploy the same model without any issues on an n1-standard machine.
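For reference, a rough sketch of the export step that worked for me (the training data below is a placeholder; the key point is dumping with pickle so the artifact is model.pkl, which the pre-built scikit-learn serving container also accepts):
import pickle

from sklearn.svm import SVR

# Placeholder training data; the real model is fitted on my own features/targets.
model = SVR()
model.fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])

# The pre-built scikit-learn container looks for "model.joblib" or "model.pkl".
# Saving with pickle instead of joblib avoided the out-of-memory error for me.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Then copy the artifact to the model directory in the bucket, e.g.:
#   gsutil cp model.pkl gs://your-bucket/model/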
Related
I'm running a training job using AWS SageMaker, and I'm using a custom Estimator based on an available Docker image from AWS. I wanted to get some feedback on whether my process is correct prior to deployment.
I'm running the training job in a Docker container using 'local' mode in a SageMaker notebook instance, and the training job runs successfully. However, after the job completes and saves the model to opt/model/models within the Docker image, the model produced by training is lost once the container exits. Ideally, I'd like to use the model for inference; however, I'm not sure about the best way of doing that. I have also tried running the training job after pushing the image to ECR, but the same thing happens.
It is my understanding that the Docker state is lost once the container exits; as such, is it possible to persist the model that was produced during training? One option I have thought about is saving the model output to an S3 bucket once the training job is complete, then pulling that model into another Docker image for inference. Is this expected behaviour, and is it the correct way of doing it?
I am fairly new to using SageMaker, but I'd like to do things according to best practices. I've looked at a lot of the AWS documentation and followed the tutorials, but they don't seem to mention explicitly whether this is how it should be done.
Thanks for any feedback on this.
You can refer to Rok's comment on saving a model file when you're using a custom estimator. That said, SageMaker built-in estimators save the model artifacts to S3. To make inferences using that model, you can either use a real-time inference endpoint or a batch transform job to run inferences in batch mode. In both cases, you'll have to point the configuration to the inference container and the model artifacts. The amazon-sagemaker-examples repository has examples for common frameworks; in particular, the scikit-learn example has detailed explanations.
Also, make sure the model is being saved to /opt/ml/model/, not opt/model/models as mentioned in your question.
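For illustration, the save step inside the training script usually boils down to something like this (the placeholder model and the joblib format are assumptions, not taken from your code):
import os

import joblib
from sklearn.linear_model import LinearRegression

# Placeholder standing in for whatever estimator your training code produces.
model = LinearRegression().fit([[0.0], [1.0]], [0.0, 1.0])

# SageMaker sets SM_MODEL_DIR (normally /opt/ml/model) when the training
# toolkit is used; fall back to the documented path for fully custom containers.
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
os.makedirs(model_dir, exist_ok=True)
joblib.dump(model, os.path.join(model_dir, "model.joblib"))

# Everything under /opt/ml/model is packaged into model.tar.gz and uploaded to
# the training job's S3 output location when the job completes.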
I am attempting to train an AI model using Google Vertex AI.
I am using a custom container and starting it with a specified machineType and acceleratorType.
My application requires an NVIDIA driver newer than 455, but the machines come with 450.51.06.
Is there a way to update the nvidia driver version on the machine running my custom container?
Because of certain VPC restrictions, I am forced to use a custom container for predictions for a model trained with TensorFlow. According to the documentation requirements, I have created an HTTP server using TensorFlow Serving. The Dockerfile used to build the image is as follows:
FROM tensorflow/serving:2.4.1-gpu
ENV MODEL_NAME=my_model
# copy the model file
COPY my_model /models/my_model
Where my_model contains the saved_model inside a folder named 1/.
I then pushed this image to Google Container Registry and created a Model using "Import an existing custom container", changing the port to 8501. However, when trying to deploy the model to an endpoint using a single compute node of type n1-standard-16 with 1 P100 GPU, the deployment runs into the error below:
Failed to create session: Internal: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
I am unable to figure out how this is happening. I am able to run the same Docker image on my local machine and successfully get predictions by hitting the endpoint that is created: http://localhost:8501/v1/models/my_model:predict
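For reference, this is roughly how I call the local endpoint (the single feature vector below is just a placeholder for my model's actual input):
import requests

# TensorFlow Serving's REST API expects a JSON body with an "instances" list.
payload = {"instances": [[1.0, 2.0, 3.0]]}  # placeholder input

response = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    json=payload,
    timeout=10,
)
print(response.json())  # {"predictions": [...]} on success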
Any help in this regard will be appreciated.
The issue was solved by downgrading the TensorFlow Serving base image to the 2.3.0-gpu version. Per the error message, the CUDA runtime version required by the custom container image is newer than what the NVIDIA driver available on the GCP AI Platform cluster supports.
I am a newbie to AWS. Right now I have defined an image segmentation function in a SageMaker notebook instance, and it returns masks.
I didn't train my models there; what I have done is pip install the model packages there and upload the pre-trained weights manually. The rest is very similar to working on a local machine: I import the packages, load the weights, and define a function that takes an image as input and outputs masks.
My question is: is there a way to host my function so that I can call a URL endpoint with one image's info and get the masks back in the response?
Again, I am very new to AWS, and I am beginning to suspect that SageMaker is not designed for this job... The reason I chose SageMaker is the need for computing capacity; I don't think I can do this job with pure Lambda.
SageMaker inference endpoints currently rely on an interface based on Docker images. At the base level, you can set up a Docker image that runs a web server and responds to the endpoints on the ports that AWS requires. This guide will show you how to do it: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html.
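For illustration, a minimal sketch of such a server, assuming Flask and a hypothetical predict_masks helper standing in for your segmentation code; SageMaker sends GET /ping health checks and POST /invocations inference requests to port 8080:
from flask import Flask, Response, jsonify, request

app = Flask(__name__)
# model = load_model()  # hypothetical helper: load your weights once at startup

@app.route("/ping", methods=["GET"])
def ping():
    # Health check: return 200 once the model is loaded and ready to serve.
    return Response(status=200)

@app.route("/invocations", methods=["POST"])
def invocations():
    image_bytes = request.get_data()  # raw request body (your image payload)
    # masks = predict_masks(model, image_bytes)  # hypothetical inference helper
    masks = []  # placeholder result
    return jsonify({"masks": masks})

if __name__ == "__main__":
    # SageMaker routes traffic to port 8080 inside the container.
    app.run(host="0.0.0.0", port=8080)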
This is an annoying amount of work. If you're using a well-known framework, AWS has a container library with boilerplate code you might be able to reuse: https://github.com/aws/sagemaker-containers. You might have to do some customization there.
Or don't use SageMaker inference endpoints at all :) If your model can fit within the size / memory restrictions of AWS Lambda, that is an easier option!
Full disclosure: I'm working on a platform that competes with SageMaker, Model Zoo.
I have a pre-trained model.pkl file, along with all the other files related to the ML model, and I want to deploy it on AWS SageMaker.
But how do I deploy it to AWS SageMaker without training? The fit() method in SageMaker runs the training job and pushes model.tar.gz to an S3 location, and when the deploy() method is used, it uses that same S3 location to deploy the model; we don't manually create that location in S3, as it is created by SageMaker and given a name using a timestamp. How do I put my own model.tar.gz file in an S3 location and call deploy() using that same location?
All you need is:
to have your model in an arbitrary S3 location in a model.tar.gz archive
to have an inference script in a SageMaker-compatible docker image that is able to read your model.pkl, serve it and handle inferences.
to create an endpoint associating your artifact with your inference code
When you ask for an endpoint deployment, SageMaker will take care of downloading your model.tar.gz and uncompressing it to the appropriate location in the server's Docker image, which is /opt/ml/model.
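For illustration, packaging and uploading the artifact could look roughly like this (the bucket and key names are placeholders):
import tarfile

import boto3

# Bundle the pre-trained artifact into the model.tar.gz layout SageMaker expects.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pkl")

# Upload to any S3 location you control (names below are placeholders).
boto3.client("s3").upload_file(
    "model.tar.gz", "my-bucket", "models/my-model/model.tar.gz"
)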
Depending on the framework you use, you may use either a pre-existing docker image (available for Scikit-learn, TensorFlow, PyTorch, MXNet) or you may need to create your own.
Regarding custom image creation, see the specification here and two examples of custom containers for R and sklearn here (the sklearn one is less relevant now that there is a pre-built Docker image along with a SageMaker sklearn SDK).
Regarding leveraging existing containers for Sklearn, PyTorch, MXNet, and TF, check this example: Random Forest in SageMaker Sklearn container. In this example, nothing prevents you from deploying a model that was trained elsewhere. Note that with a train/deploy environment mismatch you may run into errors due to software version differences, though.
Regarding your following experience:
when the deploy() method is used, it uses that same S3 location to deploy
the model; we don't manually create that location in S3, as it is created
by SageMaker and given a name using a timestamp
I agree that sometimes the demos that use the SageMaker Python SDK (one of the many available SDKs for SageMaker) may be misleading, in the sense that they often leverage the fact that an Estimator that has just been trained can be deployed (Estimator.deploy(..)) in the same session, without having to instantiate the intermediary model concept that maps inference code to model artifact. This design is presumably chosen for the sake of code compactness, but in real life, training and deployment of a given model may well be done from different scripts running on different systems. It's perfectly possible to deploy a model without having trained it previously in the same session; you need to instantiate a sagemaker.model.Model object and then deploy it.
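For illustration, a minimal sketch using SKLearnModel, the scikit-learn-specific subclass of sagemaker.model.Model (the S3 path, IAM role, entry-point script, framework version, and instance type are all placeholders):
from sagemaker.sklearn.model import SKLearnModel

# All names below are placeholders; point them at your own artifact and role.
sklearn_model = SKLearnModel(
    model_data="s3://my-bucket/models/my-model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    entry_point="inference.py",  # inference script defining model_fn to load model.pkl
    framework_version="1.0-1",
)

predictor = sklearn_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)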