predict_custom_model_sample(
    "projects/794xxx496/locations/us-central1/xxxx/3452xxx524447744",
    { "instance_key_1": "value", ... },
    { "parameter_key_1": "value", ... }
)
Google gives this example, but I don't understand the parameter_key and instance_key. To my understanding, I need to send the JSON instance:
{"instances": [ {"when": {"price": "1212"}}]}
How can I make it work with the predict_custom_model_sample?
I assume that you are trying this codelab.
Note that there seems to be a mismatch between the function name defined (predict_tabular_model) and the function name used (predict_custom_model_sample).
INSTANCES is an array of one or more JSON values of any type. Each value represents an instance that you are providing a prediction for.
instance_key_1 is just the first key of the key/value pair that goes into the instances array.
Similarly, parameter_key_1 is just the first key of the key/value pair that goes into the parameters JSON object.
If your model uses a custom container, your input must be formatted as JSON, and there is an additional parameters field that can be used for your container.
PARAMETERS is a JSON object containing any parameters that your container requires to help serve predictions on the instances. AI Platform considers the parameters field optional, so you can design your container to require it, only use it when provided, or ignore it.
Ref.: https://cloud.google.com/ai-platform-unified/docs/predictions/custom-container-requirements#request_requirements
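For illustration, here is a minimal sketch of a request body with that shape, built from the instance in the question; the "confidence_threshold" parameter name is made up and only shows where parameters would go:

request_body = {
    "instances": [
        {"when": {"price": "1212"}},  # one JSON value per instance
    ],
    # Optional: only include "parameters" if your custom container actually reads it
    "parameters": {"confidence_threshold": 0.5},  # hypothetical parameter name
}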
Here you have examples of inputs for online predictions from custom-trained models
For the codelab, I believe you can use the sample provided:
test_instance = {
    'Time': 80422,
    'Amount': 17.99,
    …
}
Then call it for prediction (remember to check the function name in the notebook cell above):
predict_custom_model_sample(
    "your-endpoint-str",
    test_instance
)
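In case that cell isn't handy, here is a rough sketch of what such a helper usually looks like in the Vertex AI (AI Platform Unified) Python samples; the regional API endpoint here is an assumption, and the function defined in your codelab may differ slightly:

from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

def predict_custom_model_sample(endpoint: str, instance_dict: dict, parameters_dict: dict = None):
    # Assumes a us-central1 endpoint; adjust the regional API endpoint to match yours
    client_options = {"api_endpoint": "us-central1-aiplatform.googleapis.com"}
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)

    # Each instance is converted into a protobuf Value before being sent
    instances = [json_format.ParseDict(instance_dict, Value())]
    parameters = json_format.ParseDict(parameters_dict or {}, Value())

    response = client.predict(endpoint=endpoint, instances=instances, parameters=parameters)
    for prediction in response.predictions:
        print(" prediction:", dict(prediction))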
I'm deploying a SageMaker inference pipeline composed of two PyTorch models (model_1 and model_2), and I am wondering if it's possible to pass the same input to both the models composing the pipeline.
What I have in mind would work more or less as follows
Invoke the endpoint sending a binary encoded payload (namely payload_ser), for example:
client.invoke_endpoint(EndpointName=ENDPOINT,
                       ContentType='application/x-npy',
                       Body=payload_ser)
The first model parses the payload with the input_fn function, runs the predictor on it, and returns the output of the predictor. As a simplified example:
import json

def input_fn(request_body, request_content_type):
    if request_content_type == "application/x-npy":
        input = some_function_to_parse_input(request_body)
        return input

def predict_fn(input_object, predictor):
    outputs = predictor(input_object)
    return outputs

def output_fn(predictions, response_content_type):
    return json.dumps(predictions)
The second model gets as payload both the original payload (payload_ser) and the output of the previous model (predictions). Possibly, the input_fn function would be used to parse the output of model_1 (as in the "standard case"), but I'd need some way to also make the original payload available to model_2. In this way, model_2 will use both the original payload and the output of model_1 to make the final prediction and return it to whoever invoked the endpoint.
Any idea if this is achievable?
Sounds like you need an inference DAG. Amazon SageMaker Inference Pipelines currently supports only a chain of handlers, where the output of handler N is the input for handler N+1.
You could change model1's predict_fn() to return both (input_object, outputs). Its output_fn() will then receive these two objects as the predictions and will handle serializing both as JSON. model2's input_fn() will need to know how to parse this pair input.
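A minimal sketch of that change, assuming array/tensor payloads as in the question; the JSON field names ("original_input", "model_1_outputs") are only illustrative:

import json

# model_1: return the original input alongside its predictions
def predict_fn(input_object, predictor):
    outputs = predictor(input_object)
    return (input_object, outputs)

def output_fn(predictions, response_content_type):
    original_input, outputs = predictions
    # Serialize both pieces so model_2 can reconstruct the pair
    # (tolist() assumes numpy/torch payloads; adapt to your data)
    return json.dumps({
        "original_input": original_input.tolist(),
        "model_1_outputs": outputs.tolist(),
    })

# model_2: parse the pair produced by model_1
def input_fn(request_body, request_content_type):
    if request_content_type == "application/json":
        payload = json.loads(request_body)
        return payload["original_input"], payload["model_1_outputs"]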
Consider implementing this as a generic pipeline handling mechanism that adds the input to the model's output. This way you could reuse it for all models and pipelines.
You could allow the model to be deployed both as a standalone model and as part of a pipeline, and trigger the relevant input/output handling behavior based on the presence of an environment variable (the Environment dict), which you can specify when creating the inference pipeline's model.
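For example, a sketch of such a toggle; the environment variable name here is hypothetical, use whatever key you put in the Environment dict:

import os

# Hypothetical flag set via the Environment dict of the pipeline model
IN_PIPELINE = os.environ.get("SAGEMAKER_IN_PIPELINE", "false").lower() == "true"

def predict_fn(input_object, predictor):
    outputs = predictor(input_object)
    # Only carry the original input along when running inside a pipeline
    return (input_object, outputs) if IN_PIPELINE else outputs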
Let's say I have an app where I want to give someone the weather in a city.
The first scene has a prompt: "What city would you like the weather of?"
I then have to collect a slot/parameter called conv.param.city and then use it in my Node webhook, which is:
const { conversation } = require('@assistant/conversation');
const functions = require('firebase-functions');

const app = conversation();

app.handle('schedule', (conv, {location}) => {
  let temperature = callApi(location); // this part doesn't matter right now
  conv.add(`You want to know the weather in ${location}`);
  conv.close(`The weather in ${location} is ${temperature}`);
});

exports.ActionsOnGoogleFulfillment = functions.https.onRequest(app);
From what I can tell, you can only take in parameters/slots that are predefined by types/intents. I cannot make a list of all cities that exist to train with. How can I basically say: whatever the user says at this point, make that word into this variable?
How can I do this with the Google Actions SDK?
You can accomplish this by setting your intent parameter type to be free text (here's an example from one of the sample repos).
freeText: {}
If you apply this type to an intent parameter, you can use the training phrases to provide the necessary context on where in the phrase that "word" should be matched (example from the same repo).
I cannot make a list of all cities that exist to train with.
Another option exists if your API can return the set of supported locations: you can use runtime type overrides to dynamically generate the type from the list of locations the API provides. This will be more accurate, but is dependent on what your data source looks like.
I have a situation where I would like to conditionally exclude a field from a query selection before I hit that query's resolver.
The use case being that my underlying API only exposes certain 'fields' based on the user's locale, and calls made to this API will throw errors if fields are requested that are not available for that locale.
I have tried an approach with directives:
type Person {
  id: Int!
  name: String!
  medicare: String @locale(locales: ["AU"])
}

type Query {
  person(id: Int!): Person
}
And using the SchemaDirectiveVisitor.visitFieldDefinition, I override field.resolve for the medicare field to return null when the user locale doesn't match any of the locales defined on the directive.
However, when a client with a non-"AU" locale executes the following:
query {
  person(id: 111) {
    name
    medicareNumber
  }
}
the field resolver for medicare is never called and the query resolver makes a request to the underlying API, appending the fields in the selection set (including the invalid medicareNumber) as query parameters. The API call returns an error object at this point.
I believe this makes sense as it seems that the directive resolver is on the FieldDefinition and would only be called when the person resolver returns a valid result.
Is there a way to achieve this sort of functionality, with or without directives?
In general, I would caution against this kind of schema design. As a client, if I include a field in the selection set, I expect to see that field in the response -- removing the field from the selection set server-side goes against the spec and can cause unnecessary confusion (especially on a larger team or with a public API).
If you are examining the requested fields in order to determine the parameters to pass to your API call, then forcing a certain field to resolve to null won't do anything -- that field will still be included in the selection set. In fact, there's really no way to create a schema directive that will impact the selection set of a request.
The best approach here would be to 1) ensure any potentially-null fields are nullable in the schema and 2) explicitly filter the selection set wherever your selection-set-to-parameters logic is.
EDIT:
Schema directives won't show up as part of the schema object returned in the info argument, so they can't be used as flags. My suggestion would be to maintain a separate in-memory map. For example:
const fieldsByLocale = {
  US: {
    Person: ['name', 'medicareNumber'],
  },
  AU: {
    Person: ['name'],
  },
}
then you could just access the appropriate list to filter with fieldsByLocale[context.locale][info.returnType]. This filtering logic is specific to your data source (in this case, the external API), so this is a bit cleaner than "polluting" the schema with information that pertains to the storage layer. If the APIs change, or you switch to a different source for this information altogether (like a database), you can update the resolvers without touching your type definitions. In fact, this way, the filtering logic can easily live inside a domain/service layer instead of your resolvers.
"order (S)","method (NULL)","time (L)"
"/1553695740/Bar","true","[ { ""N"" : ""1556593200"" }, { ""N"" : ""1556859600"" }]"
"/1556439461/adasd","true","[ { ""N"" : ""1556593200"" }, { ""N"" : ""1556679600"" }]"
"/1556516482/Foobar","cheque","[ { ""N"" : ""1556766000"" }]"
How do I scan or query for that matter on empty "method" attribute values? https://s.natalian.org/2019-04-29/null.mp4
Unfortunately, the DynamoDB console offers only a simple GUI and assumes the values you want to filter on all have the same type. When you select filters on columns of type "NULL", it only allows you to do exists or not exists. This makes sense, since a column containing only NULL datatypes can either exist or not exist.
What you have here is a column that contains multiple datatypes (since NULL is a different datatype than String). There are many ways to filter what you want, but I don't believe they are available to you on the console. Here is an example of how you could filter the dataset via the AWS CLI (note: since your column is named with a reserved word, method, you will need to alias it with an expression attribute name):
Using Filter expressions
$ aws dynamodb scan --table-name plocal --filter-expression '#M = :null' --expression-attribute-values '{":null":{"NULL":true}}' --expression-attribute-names '{"#M":"method"}'
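If you end up doing this from code rather than the CLI, the equivalent boto3 scan might look like this (table and attribute names taken from the question):

import boto3

dynamodb = boto3.client("dynamodb")
response = dynamodb.scan(
    TableName="plocal",
    FilterExpression="#M = :null",
    ExpressionAttributeNames={"#M": "method"},  # "method" is a reserved word
    ExpressionAttributeValues={":null": {"NULL": True}},
)
print(response["Items"])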
An option to consider to avoid this would be to update your logic to write some sort of filler string value instead of a null or empty string when writing your data to the database (i.e. "None" or "N/A"). Then you could operate solely on Strings and search on that value instead.
DynamoDB currently does not allow String values of an empty string and will give you errors if you try to put those items directly. To make this "easier", many of the SDKs have provided mappers/converters for objects to DynamoDB items, and this usually involves converting empty strings to Null types as a way of working around the no-empty-strings rule.
If you need to differentiate between null and "", you will need to write some custom logic to marshall/unmarshall empty strings to a unique string value (i.e. "__EMPTY_STRING") when they are stored in DynamoDB.
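A minimal sketch of that kind of marshalling, with a made-up sentinel value:

EMPTY_SENTINEL = "__EMPTY_STRING"  # hypothetical unique value

def marshall_string(value):
    # Store empty strings as the sentinel so the attribute stays a String type
    return EMPTY_SENTINEL if value == "" else value

def unmarshall_string(value):
    # Convert the sentinel back to an empty string when reading
    return "" if value == EMPTY_SENTINEL else value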
I'm pretty sure that there is no way to filter using the console. But I'm guessing that what you really want is to use such a filter in code.
DynamoDB has a very peculiar way of storing NULLs. There is a "NULL" data type which basically represents the concept of null values but it really is sort of like a boolean.
If you have the opportunity to change the data type of that attribute to be a string, or numeric, I strongly recommend doing so. Then you'll be able to create much more powerful queries with filter conditions to match what you want.
If the data already exists and you don't have a significant number of items that need to be updated, I recommend creating a new attribute to represent your data and backfilling.
Just following up on the comments. If you prefer using the mapper, you can customize how it marshals certain attributes that may be null/empty. Have a look at the Go SDK encoder implementation for some examples: https://git.codingcafe.org/Mirrors/aws/aws-sdk-go/blob/9b5aaeba7a51edcf3f87bda525a08b04b90d2ef8/service/dynamodb/dynamodbattribute/encode.go
I was able to do this inside a FilterExpression:
attribute_type(MyProperty, :nullType) - Where :nullType is a string with value NULL. This one finds null entries.
attribute_type(MyProperty, :stringType) - Where :stringType is a string with value S. This one finds non-null entries.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.OperatorsAndFunctions.html#Expressions.OperatorsAndFunctions.Syntax
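For reference, a sketch of those same two filters using boto3's condition helpers (the resource API generates the expression attribute name for the reserved word method for you; the table name is taken from the question):

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("plocal")

# Items where "method" is stored as the NULL data type
null_items = table.scan(FilterExpression=Attr("method").attribute_type("NULL"))["Items"]

# Items where "method" is stored as a String
string_items = table.scan(FilterExpression=Attr("method").attribute_type("S"))["Items"]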
In a certain scenario, I want to pass a field value (in string format) to CouchDB and get the associated doc (or only its id) which contains that particular string value in one of its fields. In case no doc contains that particular field value, I would like CouchDB design functions to automatically create one and return the newly created doc.
I can accomplish this by making a GET request followed by a PUT request if there is no doc with that particular field value. Is there any way to get this done with just one POST request?
Design document functions (other than updates) cannot modify the data in any way.
So no, this is not possible.
You can write a list function to return you a new document if the results are empty, but it cannot save it automatically.
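So the usual pattern is to do the lookup and the conditional create from your application code with two requests. A rough sketch against CouchDB 2.x+ using a Mango /_find query; the URL and database name are placeholders:

import uuid
import requests

COUCH = "http://localhost:5984"  # placeholder CouchDB URL
DB = "mydb"                      # placeholder database name

def get_or_create(field, value):
    # 1) Look for an existing doc whose `field` equals `value` (POST /{db}/_find)
    found = requests.post(f"{COUCH}/{DB}/_find",
                          json={"selector": {field: value}, "limit": 1}).json()
    if found.get("docs"):
        return found["docs"][0]

    # 2) No match: create the document ourselves -- the step a design
    #    document cannot perform for you
    doc_id = uuid.uuid4().hex
    requests.put(f"{COUCH}/{DB}/{doc_id}", json={field: value})
    return {"_id": doc_id, field: value}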