AWS Kendra PreHook Lambdas for Data Enrichment

I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer/body that will contain key information for searching.
This field in my data source is rich text containing HTML and is often longer than 2,048 characters, a limit that seems to be imposed on String data fields within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it will index about 200 of the 1,000 articles and give an error that the remaining articles exceed the 2,048-character limit on that field, which is why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use for pre-extraction, with no additional filtering, so it runs on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be required, so I have added that as well.
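For reference, the setup described above corresponds roughly to the CustomDocumentEnrichmentConfiguration shape from the API doc linked above. This is only an illustrative sketch with placeholder ARNs; the S3 bucket appears to be where Kendra stages document content for the pre-extraction lambda, which matches the s3Bucket/s3ObjectKey values in the event logged further down:
{
    "PreExtractionHookConfiguration": {
        "LambdaArn": "arn:aws:lambda:us-east-1:123456789012:function:kendra-pre-extraction",
        "S3Bucket": "kendrasfdev"
    },
    "RoleArn": "arn:aws:iam::123456789012:role/kendra-enrichment-role"
}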
For my lambda, I create the following:
exports.handler = async (event) => {
    // Debug
    console.log(JSON.stringify(event))
    // Vars
    const s3Bucket = event.s3Bucket;
    const s3ObjectKey = event.s3ObjectKey;
    const meta = event.metadata;
    // Answer
    const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');
    // Remove HTML Tags
    const removeTags = (str) => {
        if ((str === null) || (str === ''))
            return false;
        else
            str = str.toString();
        return str.replace(/(<([^>]+)>)/ig, '');
    }
    // Truncate
    const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;
    let result = truncate(removeTags(answer.value.stringValue));
    // Response
    const response = {
        "version": "v0",
        "s3ObjectKey": s3ObjectKey,
        "metadataUpdates": [
            {"name": "sf_answer_preview", "value": {"stringValue": result}}
        ]
    }
    // Debug
    console.log(response)
    // Response
    return response
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce), and strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
    "s3Bucket": "kendrasfdev",
    "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
    "metadata": {
        "attributes": [
            {
                "name": "_document_title",
                "value": {
                    "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
                }
            },
            {
                "name": "sf_answer_preview",
                "value": {
                    "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
                }
            },
            {
                "name": "_data_source_sync_job_execution_id",
                "value": {
                    "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
                }
            }
        ]
    }
}
The Problem:
When this runs, I am still getting the same field limit error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that the response from the lambda for some reason isn't setting the field value to the new content correctly, and Kendra is still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra before who might know what I am doing wrong? Being able to do things like strip PII before it gets indexed seems pretty common, so I must be slightly off in my setup somewhere.
Any thoughts?

Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich text fields to the body of the document so that your lambda can access it there. But if those fields are auto-generated for your documents by your data source, that might not be easy.
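To illustrate that workaround, here is a minimal, untested sketch of a pre-extraction lambda that works on the document body instead of a metadata attribute. It assumes the rich text has already been appended to the body, and that the content Kendra stages at event.s3Bucket/event.s3ObjectKey can be cleaned up and written back to the same key before ingestion:
// Sketch only: assumes a runtime that bundles aws-sdk v2 (e.g. Node 16)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Fetch the document content that Kendra staged in S3 for pre-extraction
    const original = await s3.getObject({
        Bucket: event.s3Bucket,
        Key: event.s3ObjectKey
    }).promise();

    // Same cleanup as the metadata version, applied to the body instead
    const cleaned = original.Body.toString('utf-8')
        .replace(/(<([^>]+)>)/ig, '')
        .substring(0, 2000);

    // Write the altered content back to the same key so Kendra picks it up
    await s3.putObject({
        Bucket: event.s3Bucket,
        Key: event.s3ObjectKey,
        Body: cleaned
    }).promise();

    return {
        version: 'v0',
        s3ObjectKey: event.s3ObjectKey
    };
};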

Related

Creating Batch Operations with AWS Amplify [GraphQL, DataStore, AppSync]

I've currently been handling batch operations with a for loop, but obviously, this is not the best approach, especially as I'm adding an 'upload by CSV' option, which will take 1000+ putItems.
I searched around for the best ways to implement this, specifically this link:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
However, even after following those steps mentioned I'm not able to achieve a batch operation. Below is my code for a 'batch delete' operation.
Here is my schema.graphql file:
type Client @model @auth(rules: [{ allow: owner }]) {
    id: ID!
    name: String!
    company: String
    phone: String
    email: String
}

type Mutation {
    batchDelete(ids: [ID]): [Client]
}
I then create two new files. One request mapping template and one response mapping template.
#set($clientsdata = [])
#foreach($item in ${ctx.args.clients})
    $util.qr($clientsdata.delete($util.dynamodb.toMapValues($item)))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "Clients": $utils.toJson($clientsdata)
    }
}
and then as per the tutorial a "simple pass through" response mapping template:
$util.toJson($ctx.result.data.Posts)
However, now when I run the batchDelete mutation, I keep getting nothing returned.
Would really appreciate guidance on this!
When it comes to performing DynamoDB batch operations in tandem with Amplify, note that the table name specified in the schema is actually different per environment, i.e. your "Client" table wouldn't be recognized as "Clients" as you have stated it in the request mapping template, but rather by the name it is given on amplify push, per environment.
E.g. Client-<some alphanumeric number>-envName
Add the full name of the table to your request and response mapping templates.
Also, your foreach statement should read:
#foreach($item in ${ctx.args.clientsdata})
wherein you iterate through each of the items in the array that is passed as the argument to the context object.
Hope this helps.
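Putting those two corrections together, a request mapping template might look roughly like the sketch below. This is only a sketch: Client-XXXXXXXXXXXXXX-dev is a placeholder for whatever table name Amplify actually generated for your environment, and it iterates ctx.args.ids to match the batchDelete(ids: [ID]) mutation declared in the schema above (rather than a clientsdata argument, which that schema doesn't define):
#set($clientsdata = [])
#foreach($id in ${ctx.args.ids})
    #set($key = { "id": $id })
    $util.qr($clientsdata.add($util.dynamodb.toMapValues($key)))
#end
{
    "version" : "2018-05-29",
    "operation" : "BatchDeleteItem",
    "tables" : {
        "Client-XXXXXXXXXXXXXX-dev": $util.toJson($clientsdata)
    }
}
The response mapping template would then need to reference the same generated table name, e.g. $util.toJson($ctx.result.data.get("Client-XXXXXXXXXXXXXX-dev")) instead of $ctx.result.data.Posts.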

Dealing With Incoming Null Values In Cloud Data Fusion When Building Data Pipeline

I have started trying out Google Cloud Data Fusion as a prospective ETL tool that I could finally decide to use. When building a data pipeline to fetch data from a REST API source and load it into a MySQL database, I am facing this error: 'Expected a string but was NULL at line 1 column 221. Please check the system logs for more details.' And yes, it's true, I have a field that is null in the JSON response I am seeing:
"systemanswertime": null
How do I deal with null values, since the string option available in the Cloud Data Fusion Studio dropdown is not working? Are there other optional data types that I can use?
Below are two screenshots showing my current data pipeline structure:
general view
view showing the mapping and the output schema
Thank You!!
What you need to do is tell the HTTP plugin that you are expecting a null by checking the null checkbox in front of the output field on the right side.
You might be getting this error because of how you are defining the value properties in the JSON schema. You should allow the systemanswertime parameter to be NULL. You could try to define the JSON value as follows:
"systemanswertime": {
"type": [
"string",
"null"
]
}
In case you don't have access to the JSON file, you could try to use this plugin to enable the HTTP source to handle nullable values by dynamically substituting the configuration, which can be served by an HTTP server. You will need to construct an accessible HTTP endpoint that can serve content similar to:
{
    "name" : "output.schema", "type" : "schema", "value" :
    [
        { "name" : "id", "type" : "int", "nullable" : true },
        { "name" : "first_name", "type" : "string", "nullable" : true },
        { "name" : "last_name", "type" : "string", "nullable" : true },
        { "name" : "email", "type" : "string", "nullable" : true }
    ]
}
In case you are facing an error such as No matching schema found for union type: ["string","null"], you could try the following workaround. The root cause of this error is that some entries in the response from the API don't have all the fields they need to have. For example, some entries may have callerId, channel, last_channel, last data, etc., but other entries may not have last_channel or some other field from the JSON. This leads to a mismatch with the schema provided in the HTTP source, and the pipeline fails right away.
As per this, when nodes encounter null values, logical errors, or other sources of errors, you may use an error handler plugin to catch the errors. The way to do it is as follows:
In the HTTP source plug-in, change the following:
Output schema, to account for the custom field.
JSON/XML field mapping, to account for the custom field.
Non-HTTP Error Handling field, changed to Send to Error. This way it pushes the records through the error collector and the pipeline proceeds with the subsequent records.
Then add an Error Collector and a sink to capture the error records.
With this method you will be able to run the pipeline and have the problematic fields detected.
Kind regards,
Manuel

Does an additional resolver in AppSync/DynamoDB bill twice for read operations?

I need to query my users' favorite posts. With this query they should be able to see the post with its information, like title and likes. Considering that the value of some attributes, like total likes, can change over time, I can't add these values to the favorite dataset. If I did that, I would have to update every favorite dataset whenever someone likes a post to keep them up to date. This led me to the decision not to add this information (for example total likes) to a favorite entry, and instead just attach an additional resolver for the post attribute. It could look like this:
type Post {
    pid: ID!
    title: String!
    content: String!
    likes: Int // <-- An attribute which I need to show while displaying favorites
}

type Favorite {
    fid: ID!
    pid: String!
    ...
    post: Post! // <-- This has its own resolver
}
The post attribute has its own additional resolver attached through AWS AppSync, which could look like this:
{
    "version": "2018-05-29",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.source.pid)
    }
}
If a user queries his own favorites, would this cost twice the read operations, since it needs two GetItem operations for each item in the result?
Example:
Querying favorites with a limit of 10 would cost 20 read ops (10 for the plain Favorite data (fid and pid) and 10 for the post data through the additional resolver).
Or does DynamoDB bundle it up into one read operation in the background? Furthermore, is this an acceptable solution (or a normal approach) for this problem? Are you using this technique, or would this get very expensive while scaling?
EDIT: I'm using only one table, so every dataset is in it.

How to send GET request to API

Summary: I have a job board, a user searches a zip code and all the jobs matching that zip code are displayed, I am trying to add a feature that lets you see jobs within a certain mile radius of that zip code. There is a web API ( www.zipcodeapi.com ) that does these calculations and returns zip codes within the specified radius, I am just unsure how to use it.
Using www.zipcodeapi.com, you enter a zip code and a distance and it returns all zip codes within this distance. The format for an API request is as follows: https://www.zipcodeapi.com/rest/<api_key>/radius.<format>/<zip_code>/<distance>/<units>, so if a user enters zip code '10566' and a distance of 5 miles, the format would be https://www.zipcodeapi.com/rest/<api_key>/radius.json/10566/5/miles and this would return:
{
    "zip_codes": [
        {
            "zip_code": "10521",
            "distance": 4.998,
            "city": "Croton On Hudson",
            "state": "NY"
        },
        {
            "zip_code": "10548",
            "distance": 3.137,
            "city": "Montrose",
            "state": "NY"
        }
        #etc...
    ]
}
My question is: how do I send a GET request to the API using Django?
I have the user-searched zip code stored in zip = request.GET.get('zip') and the mile radius stored in mile_radius = request.GET['mile_radius']. How can I incorporate those two values in their respective spots in https://www.zipcodeapi.com/rest/<api_key>/radius.<format>/<zip_code>/<distance>/<units> and send the request? Can this be done with Django, or do I have this all confused? Does it need to be done with a frontend language? I have tried to search this on Google but only find results for RESTful APIs, and I don't think that is what I am looking for. Thanks in advance for any help; if you couldn't tell, I've never worked with a web API before.
You can use the requests package to do exactly what you want. It's pretty straightforward and has good documentation.
Here's an example of how you could perform it for your case:
import requests

zip_code = request.GET.get('zip')
mile_radius = request.GET['mile_radius']
api_key = YOUR_API_KEY
fmt = 'json'
units = 'miles'
response = requests.get(
    url=f'https://www.zipcodeapi.com/rest/{api_key}/radius.{fmt}/{zip_code}/{mile_radius}/{units}')
zip_codes = response.json().get('zip_codes')
zip_codes should then be an array with those dicts as in your example.

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example is:
2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
{
    "key": "lessonrecords-create",
    "correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
    "result": {
        "id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
        "status": "NA",
        "private": false,
        "note": "Test note",
        "time": "2016-02-01T01:24:00.000Z",
        "updatedAt": "2016-07-24T21:08:07.879Z",
        "createdAt": "2016-07-24T21:08:07.879Z",
        "authorId": null,
        "lessonId": null,
        "groupId": null
    }
}
For these records I am trying to define a Log Metric Filter to (a) match the records and (b) select data or dimensions if possible.
According to the AWS docs, the JSON pattern should look like this:
{ $.key = "lessonrecords-create" }
however, it does not match anything. My guess is that this is because of the mix of text and JSON in a single log entry.
So, the questions are:
1. Is it possible to define a pattern that will match this log format?
2. Is it possible to extract dimensions, values from such a log format?
3. Help me with a pattern to do this.
If you set up the metric filter in the way that you have defined, the test will not register any matches (I have also had this issue); however, when you deploy the metric filter it will still register matches (at least mine did). Just keep in mind that there is no way (as far as I am aware) to run this metric filter BACKWARDS (i.e. it will only capture data from when it is created). [If you're trying to get stats on past data, you're better off using Logs Insights queries.]
I am currently experimenting with different parse statements to try and extract data (it's also a mix of JSON and text); this thread MAY help you (it didn't for me): Amazon Cloudwatch Logs Insights with JSON fields.
UPDATE!
I have found a way to parse the text, but it's a little bit clunky. If you export your CloudWatch logs using a lambda function to SumoLogic, their search tool allows for MUCH better log manipulation and lets you parse JSON fields (if you treat the entire entry as text). SumoLogic is also really helpful because you can just extract your search results as a CSV. For my purposes, I parse the entire log message in SumoLogic, extract all the logs as a CSV, and then I use regex in Python to filter through and extract the values I need.
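As a rough illustration of that last step (a sketch only: the CSV column name and the extracted field names are assumptions based on the sample log above):
import csv
import re

# Pull the JSON "key" and "correlationId" values out of each exported log message
pattern = re.compile(r'"key":\s*"([^"]+)".*?"correlationId":\s*"([^"]+)"', re.DOTALL)

with open('sumologic_export.csv', newline='') as f:
    for row in csv.DictReader(f):
        message = row['_raw']  # column name depends on how you exported the results
        match = pattern.search(message)
        if match:
            key, correlation_id = match.groups()
            print(key, correlation_id)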
Let's say you have the following log:
2021-09-29 15:51:18,624 [main] DEBUG com.company.app.SparkResources - AUDIT : {"user":"Raspoutine","method":"GET","pathInfo":"/analysis/123"}
You can parse it like this to be able to handle the part after "AUDIT : " as JSON:
fields #message
| parse #message "* [*] * * - AUDIT : *" as timestamp, thread, logLevel, clazz, msg
| filter ispresent(msg)
| filter method = "GET" # You can use fields which are contained in the JSON String of 'msg' field. Do not use 'msg.method' but directly 'method'
The fields contained in your isolated/parsed JSON field are automatically added as fields usable in the query.
You can use CloudWatch Events for such a purpose (aka Subscription Filters). What you will need to do is define a CloudWatch rule which uses an expression statement to match your logs.
Here, I will let you do all the reading:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Scheduled-Rule.html
:)
Split the message into 3 fields and the 3rd field will be valid JSON. I think in your case it would be:
fields #timestamp, #message
| parse #message '[*] * {"*"}' as field1, field2, field3
| limit 50
field3 is the valid JSON.
[INFO] will be the first field.
You can search JSON string representation, which is not as powerful.
For your example,
instead of { $.key = "lessonrecords-create" }
try "\"key\":\"lessonrecords-create\"".
This filter is not semantically identical to your requirement, though. It will also match events where key is not at the root of the JSON.
You can use the fluentd agent to send logs to CloudWatch. Create a custom grok pattern based on your metric filter.
Steps:
Install the fluentd agent on your server
Install the fluent-plugin-cloudwatch-logs plugin and the fluent-plugin-grok-parser plugin
Write your custom grok pattern based on your log format (see the sketch after this list)
Please refer to this blog for more information
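For the grok step, here is a rough sketch of what the fluentd configuration could look like for the first (text) line of the example log above. The file paths, tag, and log group/stream names are placeholders, and the multi-line JSON payload would still need additional multiline handling:
<source>
  @type tail
  path /var/log/app/application.log
  pos_file /var/log/td-agent/application.log.pos
  tag app.audit
  <parse>
    @type grok
    grok_pattern %{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:level}\] %{GREEDYDATA:message}
  </parse>
</source>

<match app.**>
  @type cloudwatch_logs
  log_group_name my-app-log-group
  log_stream_name my-app-log-stream
  auto_create_stream true
</match>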