Saving a file in AWS filesystem - python-2.7

Hi, I am trying out OpenCV in AWS Lambda. I want to save an SVM model to a text file so that I can load it again. Is it possible to save it in the /tmp directory and load it from there whenever I need it, or will I have to use S3?
I am using Python and trying to do something like this:
# saving the model
svm.save("/tmp/svm.dat")
# Loading the model
svm = cv2.ml.SVM_load("/tmp/svm.dat")

It's not possible to rely on /tmp for this, because the Lambda execution environment is distributed: the same function might run on several different instances, and anything written to /tmp is local to a single instance and is lost when that instance is recycled.
The alternative is to save your svm.dat to S3 and download it to /tmp whenever your Lambda function starts on a fresh instance.
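A minimal sketch of that pattern, assuming a hypothetical bucket name and object key; the model is downloaded only when it is not already cached in /tmp, so warm invocations on the same instance reuse it:
import os
import boto3
import cv2

BUCKET = 'my-bucket'        # hypothetical bucket name
KEY = 'models/svm.dat'      # hypothetical object key
LOCAL_PATH = '/tmp/svm.dat'

def load_svm():
    # Download from S3 only if the model is not already cached in /tmp
    if not os.path.isfile(LOCAL_PATH):
        boto3.client('s3').download_file(BUCKET, KEY, LOCAL_PATH)
    return cv2.ml.SVM_load(LOCAL_PATH)

def handler(event, context):
    svm = load_svm()
    # ... run predictions with svm ...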

Specify checkpoint path in custom docker image in SageMaker

I am training a model on SageMaker using a custom docker image.
I need to specify the local path (the one in the container) used to store checkpoints, so that SageMaker can copy its output to S3.
According to the documentation at https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html, I can do that when I initialize the Estimator:
# The local path where the model will save its checkpoints in the training container
checkpoint_local_path = "/opt/ml/checkpoints"

estimator = Estimator(
    ...
    image_uri="<ecr_path>/<algorithm-name>:<tag>",  # Specify to use built-in algorithms
    output_path=bucket,
    base_job_name=base_job_name,
    # Parameters required to enable checkpointing
    checkpoint_s3_uri=checkpoint_s3_bucket,
    checkpoint_local_path=checkpoint_local_path
)
I'd prefer to specify checkpoint_local_path within the Docker build instead. Is there a way to do that when building the image, maybe using an environment variable? This would also be more consistent with what AWS recommends: "We recommend specifying the local paths as '/opt/ml/checkpoints' to be consistent with the default SageMaker checkpoint settings."
Unless you dislike the /opt/ml/checkpoints name, you don't need to specify anything in your Docker image, apart from writing to /opt/ml/checkpoints (and reading from it if you're doing transfer learning or want to pick up from previously saved checkpoints).
Anything you write to /opt/ml/checkpoints in your container will be saved to S3 at the location you specify in checkpoint_s3_uri='s3://...'.
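As an illustration, a minimal sketch of what that looks like inside the training code; save_weights is a hypothetical stand-in for whatever save call your framework provides, and SageMaker syncs anything written under this directory to checkpoint_s3_uri:
import os

CHECKPOINT_DIR = "/opt/ml/checkpoints"  # the path SageMaker syncs to checkpoint_s3_uri

def save_checkpoint(model, epoch):
    os.makedirs(CHECKPOINT_DIR, exist_ok=True)
    path = os.path.join(CHECKPOINT_DIR, "epoch-{}.ckpt".format(epoch))
    model.save_weights(path)  # hypothetical: use your framework's own save call here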

AWS S3 filename

I'm trying to build an application with a Java backend that allows users to create text with images in it (something like a personal blog). I'm planning to store these images in an S3 bucket. When uploading image files to the bucket, I hash the original name and store the hashed one in the bucket. Images are for display purposes only; no user will be able to download them. The frontend displays these images by getting a path to them from the server. So the question is: is there any need to store the original name of the image file in the database? And what are the reasons, if any, for doing so?
I guess in general it is not needed, because what matters more is how these resources are used and managed in the system.
Assuming your service is something like data access (similar to Google Drive), I don't think it's necessary to store it in the DB, unless you want to make search queries faster.

AWS Lambda - How to Put ONNX Models in AWS Layers

Currently, I have been downloading my ONNX models from S3 like so:
import os
import boto3
import onnxruntime

s3 = boto3.client('s3')
if not os.path.isfile('/tmp/model.onnx'):
    s3.download_file('test', 'models/model.onnx', '/tmp/model.onnx')
inference_session = onnxruntime.InferenceSession('/tmp/model.onnx')
However, I want to decrease the latency of having to download this model. To do so, I am looking to save the model in AWS Lambda layers. However, I'm having trouble doing so.
I tried creating a ZIP file as so:
- python
    - model.onnx
and loading it like inference_session = onnxruntime.InferenceSession('/opt/model.onnx') but I got a "File doesn't exist" error. What should I do to make sure that the model can be found in the /opt/ directory?
Note: My AWS Lambda function is running on Python 3.6.
Your file should be in /opt/python/model.onnx. Therefore, you should be able to use the following to get it:
inference_session = onnxruntime.InferenceSession('/opt/python/model.onnx')
If you don't want your file to be in the python folder, then don't create the layer with such a folder. Just have model.onnx in the zip's root folder, rather than inside the python folder.
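For reference, a minimal sketch of a handler that loads the model from the layer once, outside the handler, so warm invocations reuse the session; the path assumes the layer zip layout described in the question, with model.onnx under a python folder:
import onnxruntime

# Lambda extracts layer contents under /opt; adjust the path to match your zip layout
MODEL_PATH = '/opt/python/model.onnx'

# Loaded once per execution environment, not on every invocation
inference_session = onnxruntime.InferenceSession(MODEL_PATH)

def handler(event, context):
    # ... prepare inputs and call inference_session.run(...) ...
    return {'statusCode': 200}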

Django how to upload file directly to 3rd-part storage server, like Cloudinary, S3

Now, I have realized the upload process works like this:
1. Django generates the HTTP request object and populates request.FILES using an upload handler.
2. In views.py, the FieldFile instance (the mirror of the FileField) calls storage.save() to upload the file.
So, as you can see, Django always passes the data through memory or disk first; if your file is large, this costs too much time.
The design I have in mind to solve this is a custom upload handler that calls storage.save() with the raw input data directly. The only question is: how can I modify the behaviour of FileField?
Thanks for any help.
You can use this package, which adds direct uploads to AWS S3 (with a progress bar) to file input fields:
https://github.com/bradleyg/django-s3direct
You can use one of the following packages
https://github.com/cloudinary/pycloudinary
http://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html
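If you go with django-storages, a minimal sketch of the settings involved (assuming django-storages and boto3 are installed, and a hypothetical bucket name); FileField uploads are then written to S3 by the storage backend instead of the local filesystem:
# settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-media-bucket'   # hypothetical bucket name
AWS_S3_REGION_NAME = 'us-east-1'              # adjust to your region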

How to make parameters available to SageMaker Tensorflow Endpoint

I'm looking to make some hyperparameters available to the serving endpoint in SageMaker. The training instance is given access to input parameters via hyperparameters in:
estimator = TensorFlow(entry_point='autocat.py',
                       role=role,
                       output_path=params['output_path'],
                       code_location=params['code_location'],
                       train_instance_count=1,
                       train_instance_type='ml.c4.xlarge',
                       training_steps=10000,
                       evaluation_steps=None,
                       hyperparameters=params)
However, when the endpoint is deployed, there is no way to pass in parameters that control the data processing in the input_fn(serialized_input, content_type) function.
What would be the best way to pass parameters to the serving instance? Is the source_dir parameter defined in the sagemaker.tensorflow.TensorFlow class copied to the serving instance? If so, I could use a config.yml or similar.
Ah, I have had a similar problem where I needed to download something from S3 to use in the input_fn for inference. In my case it was a dictionary.
Three options:
1. Use your config.yml approach: download and import the S3 file within your entry point file, before any function declarations. This makes it available to input_fn.
2. Keep using the hyperparameter approach: download and import the vectorizer in serving_input_fn and make it available via a global variable so that input_fn has access to it (see the sketch below).
3. Download the file from S3 before training and include it in source_dir directly.
Option 3 only works if you don't need to change the vectorizer separately after the initial training.
Whatever you do, don't download the file directly in input_fn. I made that mistake and the performance is terrible, as every invocation of the endpoint results in the S3 file being downloaded.
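A minimal sketch of that load-once-into-a-global pattern, assuming a hypothetical bucket and key for a pickled vectorizer; the download and unpickling happen only on the first request handled by a worker, not on every invocation:
import os
import pickle
import boto3

_VECTORIZER = None  # module-level cache shared by all requests in this worker

def _get_vectorizer():
    global _VECTORIZER
    if _VECTORIZER is None:
        local_path = '/tmp/vectorizer.pkl'
        if not os.path.isfile(local_path):
            # hypothetical bucket and key
            boto3.client('s3').download_file('my-bucket', 'artifacts/vectorizer.pkl', local_path)
        with open(local_path, 'rb') as f:
            _VECTORIZER = pickle.load(f)
    return _VECTORIZER

def input_fn(serialized_input, content_type):
    vectorizer = _get_vectorizer()
    # ... transform serialized_input with the vectorizer and return the model input ...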
Hyperparameters are used in the training phase to let you tune your model (hyperparameter optimization, HPO). Once you have a trained model, these hyperparameters are not needed for inference.
When you want to pass features to the serving instances, you usually do that in the body of each request to the invoke-endpoint API call (for example, see https://docs.aws.amazon.com/sagemaker/latest/dg/tf-example1-invoke.html) or in the call to the predict wrapper in the SageMaker Python SDK (https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow). You can see such examples in the sample notebooks (https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb).
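For illustration, a minimal sketch of passing per-request inputs in the request body through the runtime API, assuming a hypothetical endpoint name and a serving container that accepts JSON:
import json
import boto3

runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName='my-endpoint',                      # hypothetical endpoint name
    ContentType='application/json',
    Body=json.dumps({'instances': [[0.1, 0.2, 0.3]]})
)
print(json.loads(response['Body'].read()))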
Yes, one option is to add your configuration file to source_dir and load the file in the input_fn.
Another option is to use serving_input_fn(hyperparameters). That function transforms the TensorFlow model into a TensorFlow Serving model. For example:
def serving_input_fn(hyperparameters):
    # gets the input shape from the hyperparameters
    shape = hyperparameters.get('input_shape', [1, 7])
    tensor = tf.placeholder(tf.float32, shape=shape)
    # returns the ServingInputReceiver object
    return build_raw_serving_input_receiver_fn({INPUT_TENSOR_NAME: tensor})()