Specify signature name on Vertex AI Predict - google-cloud-platform

I've deployed a TensorFlow model on Vertex AI using TFX Pipelines. The model has custom serving signatures, but I'm struggling to specify the signature when predicting.
I have the exact same model deployed on GCP AI Platform, and there I'm able to specify it.
According to the Vertex documentation, we must pass a dictionary containing the instances (list) and the parameters (dict) values.
I've submitted these arguments to this function:
instances: [{"argument_n": "value"}]
parameters: {"signature_name": "name_of_signature"}
This doesn't work; the prediction still uses the model's default signature.
On GCP AI Platform, I've been able to predict by specifying the signature name directly in the request body:
response = service.projects().predict(
    name=name,
    body={"instances": instances,
          "signature_name": "name_of_signature"},
).execute()
EDIT:
I've discovered that it works with the raw-predict method from gcloud.
Here is an example:
!gcloud ai endpoints raw-predict {endpoint} --region=us-central1 \
--request='{"signature_name":"name_of_the_signature", \
"instances": [{"instance_0": ["value_0"], "instance_1": ["value_1"]}]}'
Unfortunately, looking at the Google API client code, it only has a predict method, not raw_predict, so I don't know whether this is available through the Python SDK right now.

Vertex AI is a newer platform with limitations that will be improved over time. "signature_name" can be added to the HTTP JSON payload in a RawPredictRequest, or passed from gcloud as you have done, but right now it is not available in regular predict requests.
Using an HTTP JSON payload:
Example:
input.json :
{
  "instances": [
    ["male", 29.8811345124283, 26.0, 1, "S", "New York, NY", 0, 0],
    ["female", 48.0, 39.6, 1, "C", "London / Paris", 0, 1]
  ],
  "signature_name": <string>
}
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://us-central1-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/us-central1/endpoints/${ENDPOINT_ID}:rawPredict \
-d "@input.json"
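If you prefer to stay in Python rather than shell out to curl, the same rawPredict call can be assembled with the standard library. A sketch under assumptions: the project, endpoint, and token values below are placeholders, and the access token must come from your own auth flow (e.g. `gcloud auth print-access-token`).

```python
import json
import urllib.request

PROJECT_ID = "my-project"    # assumption: replace with your project
ENDPOINT_ID = "1234567890"   # assumption: replace with your endpoint ID
ACCESS_TOKEN = "ya29...."    # e.g. output of `gcloud auth print-access-token`

# Same body shape as input.json above: signature_name rides alongside instances.
body = {
    "signature_name": "name_of_the_signature",
    "instances": [{"instance_0": ["value_0"], "instance_1": ["value_1"]}],
}

url = (
    f"https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/us-central1/endpoints/{ENDPOINT_ID}:rawPredict"
)

request = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment with real credentials
```

Because rawPredict is a plain HTTP endpoint, nothing here depends on the Python SDK gaining a raw_predict method.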

Related

AWS API Gateway : Signature Mismatch when using query params with AWS IAM Auth

I have a GET-based API Gateway set up, pointing to a Lambda with Lambda proxy integration enabled.
The API has AWS IAM as the auth method.
Locally, I have AWS auth set up with a temporary session token.
The following works without issue:
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
But when I add query params to the URL, it fails:
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev?a=${v1}&b=${v2}" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
This is the response that I get:
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'GET\n/dev\nb=def&a=abc\nhost:<ID>.execute-api.us-west-2.amazonaws.com\nx-amz-date:20230104T112344Z\n\nhost;x-amz-date\<date-token>'\n\nThe String-to-Sign should have been\n'AWS4-HMAC-SHA256\n20230104T112344Z\n20230104/us-west-2/execute-api/aws4_request\<token>'\n"
}
It looks like I need to include the query params in the signed portion of the request. How do I do that? Or is there something else that I'm missing?
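Note that the server's expected canonical string in the error includes the query parameters, so the signature curl computed evidently did not; some older curl releases had known issues handling query strings with --aws-sigv4, so upgrading curl is worth trying first. For reference, SigV4 requires each query name and value to be percent-encoded (unreserved characters only) and the pairs sorted by name before signing. A stdlib sketch of just that step:

```python
from urllib.parse import quote


def canonical_query_string(params):
    """Build the SigV4 canonical query string: percent-encode each name and
    value (keeping only unreserved characters), then sort pairs by name."""
    encoded = sorted(
        (quote(str(k), safe="-_.~"), quote(str(v), safe="-_.~"))
        for k, v in params.items()
    )
    return "&".join(f"{k}={v}" for k, v in encoded)


print(canonical_query_string({"b": "def", "a": "abc"}))  # a=abc&b=def
```

If you sign by hand (or via a library like botocore), this is the form the query string must take inside the canonical request.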

Reset password by using cURL restAPIs in wso2 IS

In WSO2 IS 5.11, I'm trying the update-password operation through a cURL REST API command.
I get a reset-password mail notification for the curl request below:
$ curl -X POST -k -H "Authorization: Basic YWRxxxx=" -H "Content-Type: application/json" -d '{"user": {"username": "John","realm": "PRIMARY"},"properties": []}' "https://localhost:9445/api/identity/recovery/v0.9/recover-password?type=email&notify=true"
But instead of resetting the password in the reset-password window, I need to pass the new password values through cURL in the same way. That also requires a confirmation key code with a validity period, which is available in the IDN_RECOVERY_DATA table as per this doc. So where can I find this "IDN recovery data" table? And should I use the same operation, or do I need to try different cases, such as active and inactive users via the cURL REST API, and a third case of an invalid password?
It seems you are following the blog [1] and referring to the ii) Update Password step.
IDN_RECOVERY_DATA is a table in the identity database where WSO2 IS stores information about the recovery flow, but you don't have to worry about the data stored in the database.
If you are just trying to change the password of a user without sending an email, you can use SCIM APIs.
Following is an example to update the password of a user when the existing password is provided.
curl -X PATCH 'https://localhost:9443/scim2/Me' \
-H 'accept: application/scim+json' \
-H 'Content-Type: application/scim+json' \
-H 'Authorization: Basic {base64(username:currentPassword)}' \
-d '{ "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ], "Operations": [ { "op": "replace", "value": { "password": "newPassword" } } ]}'
If you want to send an email but avoid the WSO2 UIs for creating the new password, you can change the email template to redirect the user to your UI once the user clicks the link in the email. Then use the confirmation code included in that link to invoke the set-password API. The document [2] has information on the APIs related to account recovery available in WSO2 Identity Server 5.11.0.
For locking or disabling a user, you can use the SCIM APIs to patch the relevant user attributes.
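If you would rather drive the SCIM call from Python than curl, the same PATCH can be built with the standard library. A sketch, assuming the default localhost:9443 endpoint; the username and password values are placeholders, and certificate verification is relaxed only because a local test instance uses a self-signed cert (the equivalent of curl's -k):

```python
import base64
import json
import ssl
import urllib.request

USERNAME = "John"                # assumption: the user changing their password
CURRENT_PASSWORD = "oldPass123"  # assumption: existing password
NEW_PASSWORD = "newPass123"      # assumption: desired new password

# SCIM 2.0 PatchOp body: replace the password attribute of the current user.
payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "value": {"password": NEW_PASSWORD}}],
}

# Basic auth is base64(username:currentPassword).
token = base64.b64encode(f"{USERNAME}:{CURRENT_PASSWORD}".encode()).decode()

request = urllib.request.Request(
    "https://localhost:9443/scim2/Me",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/scim+json",
        "Accept": "application/scim+json",
        "Authorization": f"Basic {token}",
    },
    method="PATCH",
)

# Skip TLS verification for a local test cert only (curl's -k flag).
context = ssl._create_unverified_context()
# response = urllib.request.urlopen(request, context=context)
```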
[1] https://medium.com/#isurakarunaratne/password-recovery-wso2-identity-server-b80abe2bcc61
[2] https://docs.wso2.com/display/IS511/apidocs/account-recovery/

Receiving Invalid Grant Type Error Received From AWS Cognito When Supply Auth Code : How do I get Id and access tokens for testing?

I am unable to acquire an ID token/access token from my AWS Cognito user pool when I supply an auth code. I have written a shell script (see below) and receive invalid_grant back from the server.
I have encoded the base64 Authorization Basic header for client_id:client_secret with Python as:
import base64
encode = 'my_client_id_string:my_client_secret_string'
base64.b64encode(encode.encode()).decode()  # b64encode takes bytes in Python 3
#!/usr/bin/env sh
curl --location --request POST 'https://<domain>.auth.us-east-2.amazoncognito.com/oauth2/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic <base64 encode string client_id:client_secret>' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'client_id=<client_id from app settings>' \
--data-urlencode 'code=<code received from redirect url to my localhost app endpoint>' \
--data-urlencode 'redirect_uri=http://localhost:8000/my_redirect'
Any ideas?
Solved it!
The problem was caused by an invalid client ID: I had supplied a typo for the client ID value.
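For anyone debugging the same flow, it can help to rebuild the token request in Python, where a typo in the client ID or secret is easier to spot than inside an opaque Basic header. A sketch with placeholder values (the domain, code, and credentials are all assumptions to be replaced):

```python
import base64
import urllib.parse
import urllib.request

CLIENT_ID = "my_client_id_string"          # must match the app client exactly
CLIENT_SECRET = "my_client_secret_string"  # a typo here also breaks the grant
AUTH_CODE = "code-from-redirect"           # placeholder auth code
REDIRECT_URI = "http://localhost:8000/my_redirect"

# Basic auth header is base64(client_id:client_secret) -- bytes in, bytes out.
basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()

form = urllib.parse.urlencode({
    "grant_type": "authorization_code",
    "client_id": CLIENT_ID,
    "code": AUTH_CODE,
    "redirect_uri": REDIRECT_URI,
})

request = urllib.request.Request(
    "https://<domain>.auth.us-east-2.amazoncognito.com/oauth2/token",
    data=form.encode("ascii"),
    headers={
        "Content-Type": "application/x-www-form-urlencoded",
        "Authorization": f"Basic {basic}",
    },
    method="POST",
)
# response = urllib.request.urlopen(request)  # needs a real domain and code
```

Printing `base64.b64decode(basic)` before sending is a quick way to confirm the header actually contains the client ID you think it does.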

Trained a text-classification model and want to pass a .csv file with text items to predict

I have already trained my dataset in Google AutoML. Now I want to pass a CSV with text items to predict their labels, and I'm not sure how to proceed.
I passed 17k labeled text items for training.
I have seen REST API & Python code to execute:
export GOOGLE_APPLICATION_CREDENTIALS=key-file-path
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://automl.googleapis.com/v1beta1/projects/ticket-classification-poc/locations/us-central1/models/TCN8669499774734365168:predict \
  -d '{ "payload": { "textSnippet": { "content": "YOUR TEXT HERE", "mime_type": "text/plain" } } }'
I need to pass future text items in bulk for prediction.
You need a loop that goes through all the CSV items programmatically and sends them one by one.
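A minimal sketch of that loop, assuming a CSV with a header row and one text item per row; the `predict_text` helper is a hypothetical stand-in for the :predict call shown above:

```python
import csv


def predict_text(content):
    """Placeholder for the AutoML :predict call in the curl example above.
    In practice this would POST {"payload": {"textSnippet": {...}}} to the
    model endpoint and return the predicted label."""
    return {"content": content, "label": "<predicted>"}


def predict_csv(path, text_column="text"):
    """Read the CSV and send one prediction request per row, since the
    AutoML text endpoint predicts a single snippet at a time."""
    results = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            results.append(predict_text(row[text_column]))
    return results
```

For very large files, batching the loop with a small delay between requests helps stay under the per-minute quota.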

Does CMLE provides a REST API endpoint for Prediction?

Is there a way I can access a REST API endpoint for a Model created by Cloud ML Engine? I only see:
gcloud ml-engine jobs submit prediction $JOB_NAME \
--model census \
--version v1 \
--data-format TEXT \
--region $REGION \
--runtime-version 1.10 \
--input-paths gs://cloud-samples-data/ml-engine/testdata/prediction/census.json \
--output-path $GCS_JOB_DIR/predictions
Yes; in fact, there are two APIs available to do this.
The projects.predict call is the simplest method. You pass in a request as described here, and it returns with the result. This cannot take input from GCS like your gcloud command does.
The projects.jobs.create call with the predictionInput and predictionOutput fields allows batch prediction, with input from GCS.
The equivalent for your command is:
POST https://ml.googleapis.com/v1/projects/$PROJECT_ID/jobs
{
  "jobId": "$JOB_NAME",
  "predictionInput": {
    "dataFormat": "TEXT",
    "inputPaths": "gs://cloud-samples-data/ml-engine/testdata/prediction/census.json",
    "region": "$REGION",
    "runtimeVersion": "1.10",
    "modelName": "projects/$PROJECT_ID/models/census"
  },
  "predictionOutput": {
    "outputPath": "$GCS_JOB_DIR/predictions"
  }
}
This returns immediately; use projects.jobs.get to check for success/failure.
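The same request body can be assembled from Python before sending it with an authorized HTTP client (e.g. google-api-python-client, which is omitted here). A stdlib sketch, with the project, job, and bucket names as placeholder assumptions:

```python
import json


def batch_prediction_job(project_id, job_id, gcs_job_dir, region="us-central1"):
    """Build the projects.jobs.create body shown above for the census model."""
    return {
        "jobId": job_id,
        "predictionInput": {
            "dataFormat": "TEXT",
            "inputPaths": "gs://cloud-samples-data/ml-engine/testdata/prediction/census.json",
            "region": region,
            "runtimeVersion": "1.10",
            "modelName": f"projects/{project_id}/models/census",
        },
        "predictionOutput": {"outputPath": f"{gcs_job_dir}/predictions"},
    }


body = batch_prediction_job("my-project", "census_batch_1", "gs://my-bucket/jobs")
# POST this body to https://ml.googleapis.com/v1/projects/my-project/jobs,
# then poll projects.jobs.get until the job state reports success or failure.
print(json.dumps(body, indent=2))
```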