I'm training several models with multiple training datasets in the Experimenter. Is there a way to save the trained models? I know that in the Explorer you can right-click to save a model and even load it up later. Thanks in advance.
I want to make a website that gives a visualization of football game statistics.
Functionality: the user sees a list of games, selects a game to see details, and can select a particular player. If there is no game he/she is interested in, the user uploads a data file for the game and adds it to the main list of games.
I have a script that cleans data and gives me a DataFrame with the columns:
['Shots', 'SCA', 'Touches', 'Pass', 'Carries', 'Press', 'Tackled', 'Interceptions', 'Blocks']
If I define the Django model, is it OK to simply make one model with these columns, or do I need to design a proper database with separate tables, as in my UML?
Is my UML OK, or do I need to fix something?
Here is a link to my UML: https://drawsql.app/xml-team/diagrams/football-game-stats
You don't need to manually create tables and relations in your database; Django can take care of that for you.
It has been explained in detail here
I suggest following these simple steps for building a Django model; I have always done it this way (a sketch of the resulting model follows the steps).
Create Django Application
Add the model
Update Settings
Make Migrations
Verify Database Schema
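For the columns in your question, step 2 could be a single model. A minimal sketch follows; the app name, model name, and field types are assumptions, not a definitive design:

```python
# football/models.py -- hypothetical app, model, and field names based on the DataFrame columns.
from django.db import models

class PlayerGameStats(models.Model):
    shots = models.PositiveIntegerField()
    sca = models.PositiveIntegerField()            # shot-creating actions
    touches = models.PositiveIntegerField()
    passes = models.PositiveIntegerField()         # "Pass" in the DataFrame
    carries = models.PositiveIntegerField()
    presses = models.PositiveIntegerField()        # "Press" in the DataFrame
    tackles = models.PositiveIntegerField()        # "Tackled" in the DataFrame
    interceptions = models.PositiveIntegerField()
    blocks = models.PositiveIntegerField()
```

Running python manage.py makemigrations and python manage.py migrate then creates the underlying table for you, which is what steps 4 and 5 refer to.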
I'd like to do image classification. In my dataset, although image features (colors, shapes, etc.) are a strong component for this classification, some categories of images will be hard to distinguish without interpreting the text inside the image.
I don't think Vertex AI/AutoML will use pre-trained models to facilitate classification when, in some cases, the only difference is the text. I know Google Vision/OCR is capable of doing such extraction, but is there a way to do image classification (Vertex AI/AutoML) using Google Cloud Vision extraction as an additional image feature?
Currently my project uses 3 models (no Google Cloud):
model 1: classifies an image using image features
model 2: classifies an image using only OCR + regex (same categories)
model 3: combines both models and decides when to use model 1 or model 2
I'd like to switch to Vertex AI because the following would improve my project's quality:
AutoML classification seems very good for model 1
I need to use a tool to manage my datasets (Vertex AI managed dataset)
Vertex AI has interesting pipeline training features
If it is confirmed that AutoML won't perform well when some image categories differ only in their text, I would recreate a similar 3-model setup using Vertex AI custom training scripts. I can easily create model 1 with Vertex AI/AutoML. However, I have no idea whether:
I can create model 2 with a Vertex AI custom training script that uses Google Cloud Vision/OCR to do image classification
I can create model 3 so that it uses models 1 and 2 created by Vertex AI
Could you give me recommendations on how to achieve that using Google Cloud Platform?
For this purpose, I recommend the following:
1. Model 2:
Keep your images in GCS.
Use Detect text in images (Cloud Vision API) to generate your text dataset, e.g. {"gcs":"gs://path_to_image/image_1","text":["text1"...]}.
Use AutoML on this text dataset produced by the Vision API, or just run a regexp on it, or insert it into a BigQuery dataset and query it, and so on (see the sketch after these steps).
2. Model 3:
I would follow a similar approach, processing the images with the Cloud Vision API and generating a text dataset, but this time images that don't have any text on them will produce records with an empty "text" field, e.g. {"gcs":"gs://path_to_image/image_2","text":[]}. Your own script can then split the records by whether "text" is empty, generating one dataset for model 2 (with text) and another for model 1 (without text).
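A minimal sketch of that extraction-and-split step, assuming the google-cloud-vision Python client and hypothetical GCS URIs and file names (not a definitive pipeline):

```python
import json
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def extract_text(gcs_uri):
    """Run Cloud Vision text detection on one image stored in GCS."""
    image = vision.Image()
    image.source.image_uri = gcs_uri
    response = client.text_detection(image=image)
    # The first annotation is the full detected text; the rest are individual blocks/words.
    return [a.description for a in response.text_annotations]

# Hypothetical list of image URIs; in practice you would list your GCS bucket.
uris = ["gs://path_to_image/image_1", "gs://path_to_image/image_2"]

with open("model2_text.jsonl", "w") as with_text, open("model1_images.jsonl", "w") as without_text:
    for uri in uris:
        record = {"gcs": uri, "text": extract_text(uri)}
        # Records with detected text feed model 2; records without text feed model 1.
        target = with_text if record["text"] else without_text
        target.write(json.dumps(record) + "\n")
```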
I see that your models 2 and 3 are not strictly classification problems. Model 2 is an OCR problem whose output you then post-process, and model 3 basically processes your data and splits it into the proper datasets.
I hope this insight may help you.
I have 5 trained Keras models with their weights saved.
To get predictions, I first recreate the model with the same architecture and then load the saved weights.
Now I want to get predictions from all these models in Django and return them as a JSON response.
Where should I load the models so that they are loaded only when the server starts?
The answer depends on the type of data you are using. For example, if it is an image classification task:
First, you need to upload your images using a simple HTML/JS form.
After receiving the images, you need to pre-process them the same way you did when training your model; that is, if the model expects images with a 224x224x3 input shape, the uploaded image needs to be resized to that shape and then converted into a NumPy array with img_to_array.
Lastly, you need to pass this array to model.predict() and read your results.
There are multiple blog posts walking through exactly this example; a minimal sketch is below.
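This sketch assumes TensorFlow/Keras, hypothetical model paths, and a 224x224x3 input shape. Loading the models at module level means they are read once when Django starts the worker process, not on every request:

```python
# predictor/views.py -- hypothetical module; paths and input shape are assumptions.
import numpy as np
from django.http import JsonResponse
from PIL import Image
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array

# Loaded once, at import time (i.e. when the server process starts).
# If you only saved weights, rebuild the architecture here and call model.load_weights() instead.
MODELS = [load_model(f"weights/model_{i}.h5") for i in range(5)]

def predict(request):
    # Pre-process the uploaded image exactly as during training.
    img = Image.open(request.FILES["image"]).convert("RGB").resize((224, 224))
    batch = np.expand_dims(img_to_array(img) / 255.0, axis=0)  # shape (1, 224, 224, 3)
    preds = {f"model_{i}": m.predict(batch).tolist() for i, m in enumerate(MODELS)}
    return JsonResponse(preds)
```

Another common place for this one-time loading is the ready() method of an AppConfig in apps.py.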
I have a very simple model in Django with 3 fields. The issue is that I have tens of billions of rows that need to be stored in the table associated with that model in a PostgreSQL database. I stored around 100 million rows, and then my server first got sluggish and then gave me an "Nginx 502 Bad Gateway" error when clicking on that table in the Django admin or requesting data through the API.
I was wondering what the best way is to handle this simple scenario of massive data in Django. What's the best model design? Should I split my model in order to split my table?
Thanks,
I would like to start off from a pretrained FastText model such as these, continue training on a different dataset, and finally export the trained model.
Is it possible?
What you are looking for is Incremental Training. Link to documentation
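A minimal sketch of incremental training, assuming the gensim FastText implementation (the answer does not name a library) and a hypothetical pretrained binary and corpus:

```python
from gensim.models.fasttext import load_facebook_model

# Hypothetical pretrained binary (e.g. one of the .bin models distributed by Facebook).
model = load_facebook_model("cc.en.300.bin")

# Your new, tokenized corpus.
new_sentences = [
    ["continue", "training", "on", "domain", "text"],
    ["fasttext", "handles", "subword", "information"],
]

model.build_vocab(new_sentences, update=True)   # extend the vocabulary with the new corpus
model.train(new_sentences, total_examples=len(new_sentences), epochs=model.epochs)

model.save("fasttext_continued.model")          # gensim format; reload later with FastText.load()
```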