How to add custom metered usage items in IBM Marketplace (AppDirect)

I am trying to do a full integration of a solution into IBM Marketplace (the one using AppDirect). There are many metering items available (Users, MBs, ...), but none of them fits our solution. Let's say, for example, we use "Places". I have checked the option "Allow custom metered usage", but that still won't let me add this "Places" metering item to my pricing option. How can I achieve this?
Note: IBM has discontinued its Marketplace. This question is probably of no use anymore, but I decided not to delete it, as we never know if they will bring it back. Also, before the discontinuation announcement, I managed to get a reply from IBM stating that they don't allow custom unit types; I was invited to use the generic "Item" instead.

If you are billing a custom usage unit, the request looks like:
{
    "account": {
        "accountIdentifier": "{UUID}"
    },
    "items": [{
        "quantity": 5,
        "customUnit": "Places",
        "price": 2.99,
        "description": "some cool places"
    }]
}
Custom units are sent in a different field ("customUnit") than the predefined units. I'm not sure which error you were getting back when attempting to bill usage, but if the response was a dump of expected unit values, that would explain it.
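For reference, a minimal sketch of posting that payload from Python. The host, endpoint path, and OAuth 1.0a signing are assumptions based on my reading of AppDirect's usage-billing integration; verify them against your marketplace's docs:

import requests
from requests_oauthlib import OAuth1  # AppDirect integration calls are OAuth 1.0a-signed

# Assumptions: marketplace host and endpoint path are placeholders.
url = "https://marketplace.example.com/api/integration/v1/billing/usage"
auth = OAuth1("<consumer-key>", "<consumer-secret>")

payload = {
    "account": {"accountIdentifier": "{UUID}"},
    "items": [{
        "quantity": 5,
        "customUnit": "Places",  # custom unit goes in "customUnit", not the predefined unit field
        "price": 2.99,
        "description": "some cool places",
    }],
}

resp = requests.post(url, json=payload, auth=auth)
resp.raise_for_status()
print(resp.status_code, resp.text)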

Related

google cloud platform -- creating alert policy -- how to specify message variable in alerting documentation markdown?

So I've created a logging alert policy on Google Cloud that monitors the project's logs and sends an alert if it finds a log that matches a certain query. This is all good and fine, but whenever it does send an email alert, it's barebones. I am unable to include anything useful in the email alert, such as the actual message; the user must instead click on "View incident" and go to the specified timeframe of when the alert happened.
Is there no way to include the message? As far as I can tell from the GCP "Using Markdown and variables in documentation templates" doc, there isn't.
I'm only really able to use ${resource.label.x}, which isn't all that useful because the alert already includes most of that information by default.
Could I have something like ${jsonPayload.message}? It didn't work when I tried it.
Probably (!) not.
To be clear, the alerting policies track metrics (not logs) and you've created a log-based metric that you're using as the basis for an alert.
There's information loss between the underlying log entry (which contains e.g. jsonPayload) and the metric that's produced from it (which probably does not). You can create log-based metric labels using expressions that include the underlying log entry's fields.
However, per the example in Google's docs, you'd want a limited (enum-like) type for these label values (e.g. HTTP status, though even that may be too broad) rather than a potentially unbounded jsonPayload.
It is possible. Suppose you need to pass the "jsonPayload.message" present in your GCP log to the documentation section of your policy. You need to use the "label_extractors" feature to extract your log message.
I will share a policy-creation template in which "jsonPayload.message" is passed into the documentation section of the policy.
policy_json = {
    "display_name": "<policy_name>",
    "documentation": {
        "content": "I have extracted the log message: ${log.extracted_label.msg}",
        "mime_type": "text/markdown"
    },
    "user_labels": {},
    "conditions": [
        {
            "display_name": "<condition_name>",
            "condition_matched_log": {
                "filter": "<filter_condition>",
                "label_extractors": {
                    "msg": "EXTRACT(jsonPayload.message)"
                }
            }
        }
    ],
    "alert_strategy": {
        "notification_rate_limit": {
            "period": "300s"
        },
        "auto_close": "604800s"
    },
    "combiner": "OR",
    "enabled": True,
    "notification_channels": [
        "<notification_channel>"
    ]
}
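If you're creating the policy programmatically, here is a minimal sketch that submits the template above with the google-cloud-monitoring Python client. The project ID is a placeholder, and converting the duration strings to timedeltas is my assumption for the proto-plus client (the REST API accepts the "300s" string form directly):

import datetime
from google.cloud import monitoring_v3

# Assumption: proto-plus Duration fields expect timedeltas, so swap the string durations.
policy_json["alert_strategy"]["notification_rate_limit"]["period"] = datetime.timedelta(seconds=300)
policy_json["alert_strategy"]["auto_close"] = datetime.timedelta(days=7)

client = monitoring_v3.AlertPolicyServiceClient()
policy = monitoring_v3.AlertPolicy(policy_json)  # build the message from the dict above
created = client.create_alert_policy(name="projects/<project_id>", alert_policy=policy)
print(created.name)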

Alexa Distribution Availability always points to all countries despite adding "isAvailableWorldwide": false

According to the Alexa documentation, the below values are set for skill distribution availability:
Alexa Skill Manifest Link
"isAvailableWorldwide": false,
"distributionCountries": [
"US",
"CA",
"AU"
]
But Alexa still points to "all the countries" when the ask-cli deployment is done.
I noticed this behaviour recently; it was working fine earlier, and nothing has changed in "skill.json".
Is anything missing in order to make the distribution available only in specific countries? (AlexaSkill -> Distribution -> Availability)
You have to add the distributionMode field in order for the distributionCountries selection to take effect after the ask-cli update:
"distributionMode": "PUBLIC"
I don't see anything wrong in the snippet you provided. To narrow down the problem, I would edit the description in skill.json to force a change with the next deployment, which you can then check in the Alexa developer console. Based on the outcome, you could check your toolchain or contact Alexa developer support if it's a bug.

AWS Personalize: Dumping User-Item Interaction Dataset Created By PutEvents

Following the AWS Personalize documentation, I successfully imported my datasets (Users, Items, Interactions) from S3, created an EventTracker, trained the model, and deployed the campaign. The solution works without any issue and I get the recommendations.
I rely on PutEvents to add new user-item interaction events. I also dump those interaction events using Lambda + Firehose into my S3 bucket. But I am wondering whether AWS Personalize internally creates/augments the original user-item interaction dataset. How can I access and download the revised version of the dataset? I cannot see any new dataset in "Dataset groups > Datasets" other than my original 3 datasets.
I would prefer to dump it regularly from AWS Personalize to my S3 storage rather than using my own Lambda + Firehose solution.
This is the output of my PutEvents call. I see a 200 status, but I'm not sure whether it actually worked. Should I see any new dataset in "Dataset groups > Datasets" created by PutEvents?
{
    "ResponseMetadata": {
        "RequestId": "a6c96496-cbd6-4ad8-9183-371d1794cbd8",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "content-type": "application/json",
            "date": "Mon, 04 Jan 2021 18:04:28 GMT",
            "x-amzn-requestid": "a6c96496-cbd6-4ad8-9183-371d1794cbd8",
            "content-length": "0",
            "connection": "keep-alive"
        },
        "RetryAttempts": 0
    }
}
Update: Now it's possible
AWS documentation:
https://docs.aws.amazon.com/personalize/latest/dg/export-data.html
You can use this AWS CLI command for exporting only the interactions that were added by PutEvents/PutUsers/PutItems API calls:
aws personalize create-dataset-export-job \
--job-name job name \
--dataset-arn dataset ARN \
--job-output "{\"s3DataDestination\":{\"kmsKeyArn\":\"kms key ARN\",\"path\":\"s3://bucket-name/folder-name/\"}}" \
--role-arn role ARN \
--ingestion-mode PUT
In that case, --ingestion-mode PUT will make sure that (quoting the docs):
"Specify PUT to export only data that you imported incrementally using the console or the PutEvents, PutUsers, or PutItems operations."
So I believe it covers your use case.
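The same export can be kicked off from Python with boto3 (a sketch; the ARNs and bucket path are placeholders):

import boto3

personalize = boto3.client("personalize")

# Export only incrementally ingested records (PutEvents/PutUsers/PutItems).
response = personalize.create_dataset_export_job(
    jobName="interactions-export",
    datasetArn="<dataset ARN>",
    ingestionMode="PUT",
    roleArn="<role ARN>",
    jobOutput={
        "s3DataDestination": {
            "kmsKeyArn": "<kms key ARN>",
            "path": "s3://bucket-name/folder-name/",
        }
    },
)
print(response["datasetExportJobArn"])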
No, it's not possible
It's simply impossible right now to export this data.
There is no API to retrieve a dump of your Interactions dataset in Personalize.
I believe the Lambda + Firehose workaround is the correct approach for this.
But how do you test whether PutEvents works?
To make sure that interactions added through PutEvents are actually recorded, you can make use of the Filters feature:
https://docs.aws.amazon.com/personalize/latest/dg/filter-expressions.html
Pretty much, create a new filter with an expression similar to:
EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("your_event_type_name")
which will exclude from recommendations any item that the user previously interacted with.
Then you can test whether events added through the PutEvents API are recognized correctly:
1. Create the filter expression as described above.
2. Create any campaign for simple recommendations (User-Personalization recipe).
3. Connect the filter to the campaign.
4. Get recommendations for any user and save them somewhere.
5. Call the PutEvents API with the user ID from step 4 and any of the recommended items returned in step 4.
6. Get recommendations again for the same user as in step 4.
If the item that you added with the PutEvents call is no longer recommended, then you have proof that events added through PutEvents are correctly added to the Interactions dataset. A sketch of this check in code follows below.
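A minimal boto3 sketch of steps 4-6 (campaign/filter ARNs, tracking ID, user ID, and event type are placeholders):

import time
import boto3

runtime = boto3.client("personalize-runtime")
events = boto3.client("personalize-events")

def recommend(user_id):
    # Steps 4 and 6: get filtered recommendations for the user.
    resp = runtime.get_recommendations(
        campaignArn="<campaign ARN>",
        filterArn="<filter ARN>",
        userId=user_id,
    )
    return [item["itemId"] for item in resp["itemList"]]

before = recommend("user-1")

# Step 5: record an interaction with one of the recommended items.
events.put_events(
    trackingId="<event tracker tracking ID>",
    userId="user-1",
    sessionId="test-session-1",
    eventList=[{
        "eventType": "your_event_type_name",  # must match the eventType in the solution and filter
        "itemId": before[0],
        "sentAt": time.time(),  # POSIX timestamp
    }],
)

# The event may take a little while to be reflected in filtered results.
after = recommend("user-1")
print(before[0] not in after)  # True once the event has been ingested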
What if the PutEvents call doesn't affect recommendations in that case?
Then you are simply providing incorrect values in the API call. Personalize might return a 200 response even if the event provided was invalid.
To fix that, try:
Make sure the timestamp is in the correct format. Personalize might ignore events with very old timestamps if there are many newer events (this can be configured in the solution config).
Check that you are not passing strange values like "null" or "undefined" for sessionId, userId, or trackingId in the PutEvents params. That might cause Personalize to ignore the event (https://github.com/aws/aws-sdk-js/issues/3371).
Make sure you are passing the correct eventType value (it should match the eventType in the solution and filter).
If it still doesn't work, raise a support ticket with AWS, including the params of an example PutEvents API call.
Are there any simpler solutions?
Well, maybe there are, but in our project we use this approach, and it also tests whether the filtering feature is working correctly. You will probably make use of filtering anyway in the future, so I believe it's a good enough method.

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. I am now able to read, from the TEST console, my object stored on S3, which is the following .json file:
[
    {
        "important": "yes",
        "name": "john",
        "type": "male"
    },
    {
        "important": "yes",
        "name": "sarah",
        "type": "female"
    },
    {
        "important": "no",
        "name": "maxim",
        "type": "male"
    }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with the mapping method.request.querystring.type in the Integration Request.
When I test by typing type=male, it is not taken into account: I still get all 3 elements instead of the 2 male elements.
Any idea why this is happening?
For reference, the resource structure is the following (and I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
/{folder}
/{item}
GET
In case anyone is interested in the answer: I have been able to solve my problem.
The full detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me):
Make sure the SQL query against your S3 data is correct. To check it, go to your S3 bucket, click on your file, and select "Query with S3 Select" from the Actions menu.
Make sure that your Lambda function works: check that you build and pass the correct SQL query from the test event (see the sketch below).
Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like "TypeL1": "$input.params('typeL1')") using the application/json content type.
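A minimal sketch of such a Lambda using S3 Select to filter the JSON by the type parameter. The bucket, key, and the shape of the incoming event are assumptions; adapt them to your mapping template:

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Assumption: the mapping template passes the query string through as event["type"].
    # Sanitize this value in real code before interpolating it into SQL.
    query = "SELECT * FROM S3Object[*][*] s WHERE s.type = '{}'".format(event["type"])

    resp = s3.select_object_content(
        Bucket="<bucket-name>",
        Key="people.json",
        ExpressionType="SQL",
        Expression=query,
        InputSerialization={"JSON": {"Type": "DOCUMENT"}},
        OutputSerialization={"JSON": {}},
    )

    # S3 Select streams results back as an event stream; collect the Records payloads
    # (newline-delimited JSON records).
    rows = b"".join(
        chunk["Records"]["Payload"] for chunk in resp["Payload"] if "Records" in chunk
    )
    return rows.decode("utf-8")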
Good luck !

FHIR Organization with webservice urls

Is there an existing use case or resource that can be used to act as a webservice registry/directory in HL7 FHIR?
I'm wondering if I have to use an Other resource that holds values for the web service endpoints I want to share for each Organization resource (which would be extended to reference this Other endpoint resource).
For example:
An organization can support XDR/XDS.b/NwHIN services and I want to include the endpoints for each service type, such as:
"endpoint": [
{
"name": "NwHIN Document Submission",
"url": "https//nwhinDSendpoint"
},
{
"name": "XDSB Provide and Register Document Set-B",
"url": "https//xdsbProvideAndRegister"
}
]
I would do these as extensions in the Organization resource:
{
    "resourceType": "Organization",
    ...
    "extension": [{
        "url": "http://joySmoth.org/fhir/StructureDefinition/nwhinDSendpoint",
        "valueUri": "https://nwhinDSendpoint"
    }]
}
I agree with Grahame's answer above.
With these registries or endpoints there is often an Identifier that is critical too, so I would actually encourage placing this extension on the specific Identifier that is relevant; a sketch follows below.
The other consideration is that this may be more than just organizational: it could be tuned more finely at the HealthcareService level (if you're using those), or even at the Practitioner level.
Another complexity you may wish to consider is the security implications and requirements of this registry. Public key management for digital certificates may be required if you use this approach, perhaps even indicating/including the supported root certs (or public keys for encrypting content before submission).
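To illustrate the Identifier suggestion, a minimal sketch reusing the hypothetical extension URL from the answer above (FHIR allows extensions on Identifier elements; the identifier system and value here are made up):

{
    "resourceType": "Organization",
    "identifier": [{
        "system": "urn:example:hcid",
        "value": "org-12345",
        "extension": [{
            "url": "http://joySmoth.org/fhir/StructureDefinition/nwhinDSendpoint",
            "valueUri": "https://nwhinDSendpoint"
        }]
    }]
}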