I have built a machine learning model using Google's AutoML Tables interface. Once the model was trained, I exported it as a Docker container to my local machine by following the steps detailed on Google's official documentation page: https://cloud.google.com/automl-tables/docs/model-export. Now it lives inside a Docker container on my machine, and I am able to run it successfully using the following command:
docker run -v exported_model:/models/default/0000001 -p 8080:8080 -it gcr.io/cloud-automl-tables-public/model_server
Once it is running locally via Docker, I am able to make predictions using the following Python code:
import requests
import json
vector = [1, 1, 1, 1, 1, 2, 1]
input = {"instances": [{"column_1": vector[0],
"column_2": vector[1],
"column_3": vector[2],
"column_4": vector[3],
"column_5": vector[4],
"column_6": vector[5],
"column_7": vector[6]}]}
jsonData = json.dumps(input)
response = requests.post("http://localhost:8080/predict", jsonData)
print(response.json())
I need to publish this model as an API to be used by my client. I have considered AWS EC2 and Azure Functions; however, I have not had any success so far. Ideally, I would like to use FastAPI, but I do not know how to do this in a dockerized context.
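Conceptually, what I have in mind is a thin FastAPI wrapper that forwards requests to the model-server container. A minimal sketch, assuming the container stays reachable on localhost:8080 (the route and names below are my own placeholders):

# Hypothetical FastAPI proxy in front of the exported AutoML model server.
# Assumes the Docker container from above is running on localhost:8080.
from fastapi import FastAPI
import requests

app = FastAPI()

@app.post("/predict")
def predict(payload: dict):
    # Forward the {"instances": [...]} payload to the local model server
    response = requests.post("http://localhost:8080/predict", json=payload)
    return response.json()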
I have since solved the problem, but it comes at a cost. It is possible to deploy an AutoML model from GCP, but it will incur a small ongoing charge. I don't think there is a way around this; otherwise, Google would lose revenue.
Once the model is up and running, the following Python code can be used to make predictions:
from google.cloud import automl_v1beta1 as automl
project_id = 'chatbot-286a1'
compute_region = 'us-central1'
model_display_name = 'DNALC_4p_global_20201112093636'
inputs = {'sessionDuration': 60.0, 'createdStartDifference': 1206116.042162, 'confirmedStartDifference': 1206116.20255, 'createdConfirmedDifference': -0.160388}
client = automl.TablesClient(project=project_id, region=compute_region)
feature_importance = False
if feature_importance:
    response = client.predict(
        model_display_name=model_display_name,
        inputs=inputs,
        feature_importance=True,
    )
else:
    response = client.predict(
        model_display_name=model_display_name, inputs=inputs
    )

print("Prediction results:")
for result in response.payload:
    print("Predicted class name: {}".format(result.tables.value))
    print("Predicted class score: {}".format(result.tables.score))
    break
In order for the code to work, the following resources may be helpful:
Installing AutoML in Python:
How to install google.cloud automl_v1beta1 for python using anaconda?
Authenticating AutoML in python:
https://cloud.google.com/docs/authentication/production
(Remember to set the path to the JSON authentication key file as an environment variable; this is for security purposes.)
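For example, before creating the client, the credentials can be pointed at in Python (the key path below is a placeholder):

import os

# Hypothetical path to the downloaded service-account key file
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"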
I have been banging my head against this for a while, and Google Cloud does not have a lot of documentation about this issue. What I am trying to do is deploy a custom ML model on Google Cloud Vertex AI by:
Uploading the model to the Model Registry in Vertex AI
Creating an endpoint
Deploying the uploaded model to the created endpoint.
Steps 1 and 2 are easy to implement, and I am not facing any issues. However, step 3 always fails for some reason. Even the logs don't give me a lot of information.
For Step 1:
This is the Dockerfile I am using to create a custom image to serve my ML model:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim
COPY requirements-base.txt requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
COPY serve.py serve.py
COPY model.pkl model.pkl
And this is what my serve.py file looks like:
from fastapi import Request, FastAPI, Response
import json
import catboost
import pickle
import os
app = FastAPI(title="Sentiment Analysis")
AIP_HEALTH_ROUTE = os.environ.get('AIP_HEALTH_ROUTE', '/health')
AIP_PREDICT_ROUTE = os.environ.get('AIP_PREDICT_ROUTE', '/predict')
@app.get(AIP_HEALTH_ROUTE, status_code=200)
async def health():
    return {'health': 'ok'}

@app.post(AIP_PREDICT_ROUTE)
async def predict(request: Request):
    with open('model.pkl', 'rb') as file:
        model = pickle.load(file)

    data = request.get_json()
    input_data = data['input']
    predictions = model.predict(input_data)

    return json.dumps({'predictions': predictions.tolist()})

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", port=8080)
After building the image, I push it to Artifact Registry on Google Cloud.
Is there an issue with how I have written the serve.py file or the Dockerfile?
Or is there an easier way to deploy custom ML models on Google Cloud for MLOps and prediction purposes?
Well, I tried a couple of manual approaches from the Google Cloud Vertex AI console and also using gcloud commands.
In the manual process, after importing the model with the custom image, I clicked on deploy to an endpoint. But this seems to always fail and takes forever.
Similarly, using gcloud, I first create an endpoint, then upload my model to the registry, and then deploy the model to the created endpoint (roughly the sequence sketched below). But this approach also fails.
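For reference, the gcloud sequence I'm following looks roughly like this (display names, region, image URI, and machine type are placeholders):

gcloud ai endpoints create --region=us-central1 --display-name=my-endpoint

gcloud ai models upload --region=us-central1 --display-name=my-model \
    --container-image-uri=us-central1-docker.pkg.dev/MY_PROJECT/MY_REPO/MY_IMAGE \
    --container-predict-route=/predict --container-health-route=/health \
    --container-ports=8080

gcloud ai endpoints deploy-model ENDPOINT_ID --region=us-central1 \
    --model=MODEL_ID --display-name=my-deployment --machine-type=n1-standard-2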
At the end of the day, I want my model to be successfully deployed on the endpoint and to return correct predictions, or else some other reasonable and manageable way to host my custom ML model on Google Cloud and make predictions with it.
I have deployed JanusGraph using Helm in Google Cloud containers, following the documentation below:
https://cloud.google.com/architecture/running-janusgraph-with-bigtable
I'm able to run Gremlin queries using Google Cloud Shell.
(Screenshot of Google Cloud Shell)
Now I want to access JanusGraph using Python. I tried the code below, but it's unable to connect to JanusGraph inside the GCP container.
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
g = graph.traversal().withRemote(DriverRemoteConnection('gs://127.0.0.1:8182/gremlin','g'))
value = g.V().has('name','hercules').values('age')
print(value)
Here's the output I'm getting:
[['V'], ['has', 'name', 'hercules'], ['values', 'age']]
whereas the output should be:
30
Has anyone tried to access JanusGraph using Python inside GCP?
You need to end the query with a terminal step such as next() or toList(). What you are seeing is the query bytecode printed, because the query was never submitted to the server due to the missing terminal step. So you need something like this:
value = g.V().has('name','hercules').values('age').next()
print(value)
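For completeness, a terminal step like toList() would instead return every matching value as a Python list:
values = g.V().has('name','hercules').values('age').toList()
print(values)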
I am very new to Google Cloud Platform and hence asking a basic question.
I am looking for an API which will be hosted in GCP. An external application will call the API to read data from BigQuery.
Can anyone help me out with an example code/approach?
I am looking for an end-to-end, cloud-based solution based on Python.
I can't provide you with a complete code example. But:
You can set up your Python API (using Flask, for example).
You can then use the Python client to connect to BigQuery: https://cloud.google.com/bigquery/docs/reference/libraries
Deploy your Python API on Google App Engine, Cloud Run, Kubernetes, Compute Engine, etc.
Do not forget to set up CORS and any required auth.
That's it.
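A minimal sketch of such an API, assuming Flask and the public usa_names dataset (the route and names are placeholders):

# Hypothetical minimal Flask API that reads from BigQuery.
from flask import Flask, jsonify
from google.cloud import bigquery

app = Flask(__name__)
client = bigquery.Client()

@app.route("/names")
def names():
    # Query a public dataset and return the results as JSON
    query = """
        SELECT name, SUM(number) AS total_people
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total_people DESC
        LIMIT 10
    """
    rows = client.query(query).result()
    return jsonify([{"name": row.name, "total_people": row.total_people} for row in rows])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)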
You can create a Python program using the BigQuery client, then deploy this program as an HTTP Cloud Function or a Cloud Run service:
from flask import escape
from google.cloud import bigquery
import functions_framework

@functions_framework.http
def your_http_function(request):
    # HTTP Cloud Function.
    request_json = request.get_json(silent=True)
    request_args = request.args

    # Example of retrieving the "name" argument from the HTTP call
    if request_json and 'name' in request_json:
        name = request_json['name']
    elif request_args and 'name' in request_args:
        name = request_args['name']

    # Construct a BigQuery client object.
    client = bigquery.Client()

    query = """
        SELECT name, SUM(number) as total_people
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state = 'TX'
        GROUP BY name, state
        ORDER BY total_people DESC
        LIMIT 20
    """
    query_job = client.query(query)  # Make an API request.
    rows = query_job.result()  # Waits for the query to finish.

    names = []
    for row in rows:
        print(row.name)
        names.append(row.name)

    # Return a JSON-serializable result rather than the raw row iterator.
    return {'names': names}
You have to deploy your Python code as a Cloud Function in this example.
Your function can then be invoked with an HTTP call with a name parameter:
https://GCP_REGION-PROJECT_ID.cloudfunctions.net/hello_http?name=NAME
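For example, the deploy command might look roughly like this (function name, runtime, and region are placeholders):

gcloud functions deploy your_http_function \
    --runtime=python311 \
    --trigger-http \
    --allow-unauthenticated \
    --region=us-central1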
You can also use Cloud Run, which gives more flexibility because you deploy a Docker image.
I'm having trouble executing Vertex AI batch inference, despite endpoint deployment and online inference working perfectly. My TensorFlow model has been trained in a custom Docker container with the following arguments:
aiplatform.CustomContainerTrainingJob(
    display_name=display_name,
    command=["python3", "train.py"],
    container_uri=container_uri,
    model_serving_container_image_uri=container_uri,
    model_serving_container_environment_variables=env_vars,
    model_serving_container_predict_route='/predict',
    model_serving_container_health_route='/health',
    model_serving_container_command=[
        "gunicorn",
        "src.inference:app",
        "--bind",
        "0.0.0.0:5000",
        "-k",
        "uvicorn.workers.UvicornWorker",
        "-t",
        "6000",
    ],
    model_serving_container_ports=[5000],
)
I have endpoints defined for predict and health, essentially as below:
#app.get(f"/health")
def health_check_batch():
return 200
#app.post(f"/predict")
def predict_batch(request_body: dict):
pred_df = pd.DataFrame(request_body['instances'],
columns = request_body['parameters']['columns'])
# do some model inference things
return {"predictions": predictions.tolist()}
As described, when training a model and deploying it to an endpoint, I can successfully hit the API with a JSON payload like:
{"instances":[[1,2], [1,3]], "parameters":{"columns":["first", "second"]}}
This also works when using the endpoint Python SDK and feeding in instances/parameters as functional arguments.
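For reference, the working online call via the SDK looks roughly like this (project, region, and endpoint ID are placeholders):

# Hypothetical online prediction call that mirrors the JSON payload above.
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/MY_PROJECT/locations/us-central1/endpoints/ENDPOINT_ID"
)
response = endpoint.predict(
    instances=[[1, 2], [1, 3]],
    parameters={"columns": ["first", "second"]},
)
print(response.predictions)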
However, I've tried performing batch inference with a CSV file and a JSONL file, and every time it fails with Error Code 3. I can't find logs on why it failed in Logs Explorer either. I've read through all the documentation I could find and have seen others successfully invoke batch inference, but I haven't been able to find a guide. Does anyone have recommendations on the batch file structure or the structure of my APIs? Thank you!
We have a model currently serving on Cloud ML. As a modification, we added connections to Datastore, which return 403 Insufficient privileges.
The mock code generating the error is:
from google.cloud import datastore
import datetime
# create & upload task
client = datastore.Client()
key = client.key('Task')
task = datastore.Entity(
    key, exclude_from_indexes=['description'])
task.update({
    'created': datetime.datetime.utcnow(),
    'description': 'description',
    'done': False
})
client.put(task)
# now list tasks
query = client.query(kind='Task')
query.order = ['created']
return list(query.fetch())
The next step would be adding credentials (a service account) and exporting the new key path via the GOOGLE_APPLICATION_CREDENTIALS environment variable. However, since getting this account is difficult (company layering), I'd like to save time by asking the question first.
Is going through Cloud Functions the only way to communicate with a NoSQL DB? Is that the common approach?
When you create your model version, you need custom prediction, and you must define a service account that has access to your resources:
gcloud components install beta
gcloud beta ai-platform versions create your-version-name \
--service-account your-service-account-name#your-project-id.iam.gserviceaccount.com
...