How to increase performance in AWS Comprehend on custom classification

I trained a custom classifier with just two labels in a CSV.
I fed my custom classification model with 1000 texts per label,
but when I run a job on my custom classification model, the job takes ~5 min (running) to analyze one new text. I searched for this issue in AWS, but I didn't find any answer...
How can I speed up / optimize my job for analyzing new text with the model?
Thank you in advance

Prior to Nov 2019, Comprehend only supported asynchronous inference for Custom classification. Asynchronous inference is optimized for bulk processing.
Comprehend has since launched real-time inference for Custom classification to satisfy the real-time needs of our customers.
https://docs.aws.amazon.com/comprehend/latest/dg/custom-sync.html
Note that Custom endpoints are charged by time units even when you're not actively using them. You can also look at the pricing document for details - https://aws.amazon.com/comprehend/pricing/
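For reference, here is a minimal sketch of calling the real-time API with boto3, assuming a custom endpoint has already been created (the region and endpoint ARN below are placeholders):

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Placeholder ARN: replace with the endpoint created for your custom classifier
    endpoint_arn = (
        "arn:aws:comprehend:us-east-1:123456789012:"
        "document-classifier-endpoint/my-endpoint"
    )

    response = comprehend.classify_document(
        Text="Text of the new document to classify",
        EndpointArn=endpoint_arn,
    )

    # Each candidate class comes back with a confidence score
    for cls in response["Classes"]:
        print(cls["Name"], cls["Score"])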

Related

Is it possible to visualize the data in a SageMaker processing job?

I am working on creating a custom SageMaker processing job that transforms my dataset. I want to plot the data matrix before and after the job, i.e., visualize the job. It is possible for me to create another processing job that does this plotting; however, I would prefer the job to be self-contained.
The only option that might fit my needs is monitoring through regular expressions, as in plotting learning curves here: Monitor and Analyze Training Jobs Using Amazon CloudWatch Metrics. It is pretty tedious to plot a matrix of, say, 10k rows by ~300 columns like that, so I wonder about more native ways to do this task.
Thanks in advance
There is no built-in way to visualize data for jobs in progress. You can publish the metrics you want to CloudWatch and visualize them there, or use an external application like neptune.ai.
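For illustration, a rough sketch of publishing such a metric from inside a processing script with boto3 (the namespace and metric name are made up for this example):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Hypothetical namespace and metric name; choose your own
    cloudwatch.put_metric_data(
        Namespace="ProcessingJobs/MyTransform",
        MetricData=[
            {
                "MetricName": "RowsProcessed",
                "Value": 10000,
                "Unit": "Count",
            }
        ],
    )

You can then chart the metric on a CloudWatch dashboard while the job runs.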

Custom Classifier - AWS Comprehend - Alternative Options

We were looking to use the AWS Comprehend custom classifier, but its pricing seems way too high, since it starts charging the moment an endpoint is provisioned, even if it is not used ("Endpoints are billed on one second increments, with a minimum of 60 seconds. Charges will continue to incur from the time you start the endpoint until it is deleted even if no documents are analyzed.")
So, we need the feature but would like to see if there is an alternate way to use the classifiers we have.
Any ideas?
Comprehend supports both synchronous and asynchronous inference on custom classifiers. Synchronous inference provides sub-second response time but requires setting up a custom endpoint to host the model and is charged on uptime.
Asynchronous inference (StartDocumentClassificationJob) usually takes a few minutes to an hour, depending on the amount of data being processed, and is billed based on data volume (1 billing unit = 100 characters).
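A minimal sketch of the asynchronous path with boto3 (the ARNs and S3 URIs are placeholders):

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Placeholder ARNs and S3 URIs; substitute your own resources
    response = comprehend.start_document_classification_job(
        JobName="my-classification-job",
        DocumentClassifierArn=(
            "arn:aws:comprehend:us-east-1:123456789012:"
            "document-classifier/my-classifier"
        ),
        InputDataConfig={
            "S3Uri": "s3://my-bucket/input/",
            "InputFormat": "ONE_DOC_PER_LINE",
        },
        OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
        DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccess",
    )
    print(response["JobId"], response["JobStatus"])

Since you pay per unit of text rather than per hour of endpoint uptime, this is usually the cheaper option when you can tolerate the batch latency.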

Training multiple models in AWS SageMaker

Can I train multiple models in AWS SageMaker by evaluating the models in the train.py script, and how can I get back multiple metrics from multiple models?
Any links, docs or videos would be useful.
Yes, what you write in a SageMaker training script (assuming you use something that lets you pass custom code, like your own container or a framework container) is flexible and does not need to be just one model, or even ML. You can definitely write multiple model trainings in a single container and pull all related metrics using SageMaker metric capture via regex; see an example regex here with the Sklearn random forest, and the sketch after the list below.
That being said, it is often a better idea to separate things and have one model per SageMaker job, for the following reasons among others:
- It allows you to separate model metadata and metrics and compare them easily with the SageMaker metadata service.
- It allows you to specialize hardware to each model and get better economics; each model has its own sweet spot when it comes to CPU, GPU, and RAM.
- It allows you to use the exact same container for a single training but also for Bayesian hyperparameter search, a method that can be both faster and cheaper than regular grid search.
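As referenced above, a sketch of capturing per-model metrics via regex with the SageMaker Python SDK (the role ARN, script name, and metric names are hypothetical, and the regexes must match whatever your train.py prints to stdout):

    from sagemaker.sklearn.estimator import SKLearn

    # Hypothetical example: train.py trains two models and prints lines like
    # "model_a_auc: 0.91" and "model_b_auc: 0.88"
    estimator = SKLearn(
        entry_point="train.py",
        framework_version="0.23-1",
        instance_type="ml.m5.xlarge",
        instance_count=1,
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        metric_definitions=[
            {"Name": "model_a:auc", "Regex": r"model_a_auc: ([0-9\.]+)"},
            {"Name": "model_b:auc", "Regex": r"model_b_auc: ([0-9\.]+)"},
        ],
    )
    estimator.fit({"train": "s3://my-bucket/train/"})

Both metrics then show up in the training job's metadata and in CloudWatch, so the models can still be compared side by side.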

Google Cloud Platform

I am building a classification model using AutoML and I have some basic usage questions about GCP.
1 - Data privacy question; if we save behavior data to train our model in BigQuery, does Google have access to that data? Could Google ever use that data to learn more about behavior of individuals we collected data from?
2 - Since training costs are charged by the hour, I would like to understand the relationship between data and training time. Does the time increase linearly with the size of the training data set? For example, we trained a classification using 1.7MB of data and it took 3 hrs. So, would training a model with 17MB of data take 30 hours?
3 - A batch prediction costs 1.16 USD per hour. However, our data is in a CSV and it seems that we cannot upload a CSV to do a batch prediction. So, we will try using the API. Therefore I have two questions: A) can we do a batch upload using the API, and B) what are the associated costs?
4 - What exactly is an online prediction?
5 - When using the cost calculator (for machine learning), what is a node hour?
1- As is mentioned in the Data Usage FAQ, Google does not use any of your content for any purpose except to provide you with the Cloud AutoML service.
2- The time required to train your model depends on the size and complexity of your training data; for a detailed explanation, take a look at the Vision documentation, for example.
3- You need to upload your CSV file to Google Cloud Storage; then you can use it in the API or any of the available client libraries (a minimal upload sketch follows this list). See Natural Language batch prediction, for example. For costs, check the documentation for the desired product. AutoML pricing depends on which feature you are using: Vision, Natural Language, Translation, Video Intelligence.
4- After you have created (trained) a model, you can deploy the model and request online (single, low-latency and real-time) predictions. Online predictions accept one row of data and provide a predicted result based on your model for that data. You use online predictions when you need a prediction as input for your business logic flow.
5- You can think of a node as a single virtual machine whose resources are used for computing purposes. Machine types differ depending on the product and the purpose for which they are used. For example, the cost for AutoML Vision image classification model training is $3.15 per node hour, where each node is equivalent to an n1-standard-8 machine with an attached NVIDIA Tesla V100 GPU. A node hour is then one hour's use of such a node's resources; training on one node for 10 hours, for example, consumes 10 node hours ($31.50).
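As referenced in point 3, a minimal sketch of staging a CSV in Cloud Storage with the Python client library (the bucket and object names are placeholders):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-automl-input-bucket")  # placeholder bucket name

    # Upload the local CSV so the batch prediction API can read it from GCS
    blob = bucket.blob("batch/input.csv")
    blob.upload_from_filename("input.csv")
    print(f"Uploaded to gs://{bucket.name}/{blob.name}")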

What is Google Cloud's anomaly detection solution for time series streaming data, similar to AWS' Kinesis Random Cut Forest algorithm?

I'm trying to implement an anomaly detection machine learning solution on GCP, but I'm finding it hard to find a specific solution using Google Cloud ML like AWS' Random Cut Forest solution in Kinesis. I'm streaming IoT temperature sensor data for water heaters.
Does anyone know a TensorFlow/Google solution for this, as my company only uses the Google stack?
I've tried using sklearn models, but none of them can be deployed to production for streaming data, so I have to use TensorFlow, though I am a novice. Any suggestions on a good flow to get this done?
I would suggest using the Esper complex event processing engine if your primary concern is the analysis of data streams and catching patterns in real time. It provides an SQL-like event processing language (EPL) which runs as continuous queries over streaming data. Esper offers abstractions for correlation, aggregation, and pattern detection. It is an open-source project, and a license is required if you want to run the engine on multiple servers to achieve high availability.
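Esper itself is a Java engine, but as an illustration of the kind of sliding-window check its EPL queries express, here is a minimal Python sketch that flags temperature readings deviating sharply from a rolling mean (the window size and threshold are arbitrary):

    from collections import deque
    from statistics import mean, stdev

    WINDOW = 60        # number of recent readings to keep (arbitrary)
    THRESHOLD = 3.0    # flag readings more than 3 standard deviations away

    window = deque(maxlen=WINDOW)

    def is_anomalous(reading: float) -> bool:
        """Return True if the reading deviates sharply from the rolling window."""
        anomalous = False
        if len(window) >= 10 and stdev(window) > 0:
            z = abs(reading - mean(window)) / stdev(window)
            anomalous = z > THRESHOLD
        window.append(reading)
        return anomalous

    # Example: a stable stream of readings with one spike at the end
    for t in [20.1, 20.3, 19.9, 20.0] * 5 + [35.0]:
        if is_anomalous(t):
            print(f"anomaly detected: {t}")

On GCP, the same logic could plausibly run inside a streaming consumer (e.g., a Dataflow pipeline reading from Pub/Sub), though that wiring is beyond this sketch.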